Patch "rhashtable: Fix potential deadlock by moving schedule_work outside lock" has been added to the 6.13-stable tree

This is a note to let you know that I've just added the patch titled

    rhashtable: Fix potential deadlock by moving schedule_work outside lock

to the 6.13-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     rhashtable-fix-potential-deadlock-by-moving-schedule.patch
and it can be found in the queue-6.13 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 1c00d7e5083bf22000798004a3b1602ec26bb8ef
Author: Breno Leitao <leitao@xxxxxxxxxx>
Date:   Thu Nov 28 04:16:25 2024 -0800

    rhashtable: Fix potential deadlock by moving schedule_work outside lock
    
    [ Upstream commit e1d3422c95f003eba241c176adfe593c33e8a8f6 ]
    
    Move the hash table growth check and work scheduling outside the
    rht lock to prevent a possible circular locking dependency.
    
    The original implementation could trigger a lockdep warning due to
    a potential deadlock scenario involving nested locks between the
    rhashtable bucket lock, rq lock, and dsq lock. By relocating the
    growth check and work scheduling to after the rht lock is released,
    we break this potential deadlock chain.
    
    This change expands the flexibility of rhashtable by removing
    restrictive locking that previously limited its use in scheduler
    and workqueue contexts.
    
    Important to note that this calls rht_grow_above_75(), which reads
    from struct rhashtable without holding the lock. If this becomes a
    problem, we can move the check back inside the lock and schedule the
    workqueue after the lock is released.
    
    Fixes: f0e1a0643a59 ("sched_ext: Implement BPF extensible scheduler class")
    Suggested-by: Tejun Heo <tj@xxxxxxxxxx>
    Signed-off-by: Breno Leitao <leitao@xxxxxxxxxx>
    
    Modified so that atomic_inc is also moved outside of the bucket
    lock along with the growth above 75% check.
    
    Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 6c902639728b7..bf956b85455ab 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -584,10 +584,6 @@ static struct bucket_table *rhashtable_insert_one(
 	 */
 	rht_assign_locked(bkt, obj);
 
-	atomic_inc(&ht->nelems);
-	if (rht_grow_above_75(ht, tbl))
-		schedule_work(&ht->run_work);
-
 	return NULL;
 }
 
@@ -624,6 +620,12 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
 				data = ERR_CAST(new_tbl);
 
 			rht_unlock(tbl, bkt, flags);
+
+			if (PTR_ERR(data) == -ENOENT && !new_tbl) {
+				atomic_inc(&ht->nelems);
+				if (rht_grow_above_75(ht, tbl))
+					schedule_work(&ht->run_work);
+			}
 		}
 	} while (!IS_ERR_OR_NULL(new_tbl));
 
