Patch "interconnect: Don't access req_list while it's being manipulated" has been added to the 6.6-stable tree

This is a note to let you know that I've just added the patch titled

    interconnect: Don't access req_list while it's being manipulated

to the 6.6-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    interconnect-don-t-access-req_list-while-it-s-being-.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit a94dd5aee55f39a2f6df40f401fc583c5a7e27ed
Author: Mike Tipton <quic_mdtipton@xxxxxxxxxxx>
Date:   Tue Mar 5 14:56:52 2024 -0800

    interconnect: Don't access req_list while it's being manipulated
    
    [ Upstream commit de1bf25b6d771abdb52d43546cf57ad775fb68a1 ]
    
    The icc_lock mutex was split into separate icc_lock and icc_bw_lock
    mutexes in [1] to avoid lockdep splats. However, this didn't adequately
    protect access to icc_node::req_list.
    
    The icc_set_bw() function will eventually iterate over req_list while
    only holding icc_bw_lock, but req_list can be modified while only
    holding icc_lock. This causes races between icc_set_bw(), of_icc_get(),
    and icc_put().
    
    Example A:
    
      CPU0                               CPU1
      ----                               ----
      icc_set_bw(path_a)
        mutex_lock(&icc_bw_lock);
                                         icc_put(path_b)
                                           mutex_lock(&icc_lock);
        aggregate_requests()
          hlist_for_each_entry(r, ...
                                           hlist_del(...
            <r = invalid pointer>
    
    Example B:
    
      CPU0                               CPU1
      ----                               ----
      icc_set_bw(path_a)
        mutex_lock(&icc_bw_lock);
                                         path_b = of_icc_get()
                                           of_icc_get_by_index()
                                             mutex_lock(&icc_lock);
                                             path_find()
                                               path_init()
        aggregate_requests()
          hlist_for_each_entry(r, ...
                                                 hlist_add_head(...
            <r = invalid pointer>
    
    Fix this by ensuring icc_bw_lock is always held before manipulating
    icc_node::req_list. The additional places icc_bw_lock is held don't
    perform any memory allocations, so we should still be safe from the
    original lockdep splats that motivated the separate locks.
    
    [1] commit af42269c3523 ("interconnect: Fix locking for runpm vs reclaim")
    
    Signed-off-by: Mike Tipton <quic_mdtipton@xxxxxxxxxxx>
    Fixes: af42269c3523 ("interconnect: Fix locking for runpm vs reclaim")
    Reviewed-by: Rob Clark <robdclark@xxxxxxxxxxxx>
    Link: https://lore.kernel.org/r/20240305225652.22872-1-quic_mdtipton@xxxxxxxxxxx
    Signed-off-by: Georgi Djakov <djakov@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 50bac2d79d9b5..68edb07d4443e 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -176,6 +176,8 @@ static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
 
 	path->num_nodes = num_nodes;
 
+	mutex_lock(&icc_bw_lock);
+
 	for (i = num_nodes - 1; i >= 0; i--) {
 		node->provider->users++;
 		hlist_add_head(&path->reqs[i].req_node, &node->req_list);
@@ -186,6 +188,8 @@ static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
 		node = node->reverse;
 	}
 
+	mutex_unlock(&icc_bw_lock);
+
 	return path;
 }
 
@@ -792,12 +796,16 @@ void icc_put(struct icc_path *path)
 		pr_err("%s: error (%d)\n", __func__, ret);
 
 	mutex_lock(&icc_lock);
+	mutex_lock(&icc_bw_lock);
+
 	for (i = 0; i < path->num_nodes; i++) {
 		node = path->reqs[i].node;
 		hlist_del(&path->reqs[i].req_node);
 		if (!WARN_ON(!node->provider->users))
 			node->provider->users--;
 	}
+
+	mutex_unlock(&icc_bw_lock);
 	mutex_unlock(&icc_lock);
 
 	kfree_const(path->name);
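
As a purely illustrative aside (not part of the patch, and not kernel code), the
standalone userspace sketch below models the same failure pattern with pthreads.
All names in it (race_model.c, topology_lock, bw_lock, aggregate(), release())
are invented for the analogy: one thread walks a list holding only one mutex
while another thread unlinks and frees an entry holding only the other mutex,
so the walker can dereference a freed node. Taking the same mutex on both
sides, as the patch does with icc_bw_lock, closes that window.

/* race_model.c - userspace analogy of the req_list race (illustrative only;
 * every name here is invented for the example and does not exist in the kernel).
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct req {
        int bw;
        struct req *next;
};

static struct req *req_list;                /* models icc_node::req_list */
static pthread_mutex_t topology_lock = PTHREAD_MUTEX_INITIALIZER; /* ~icc_lock    */
static pthread_mutex_t bw_lock = PTHREAD_MUTEX_INITIALIZER;       /* ~icc_bw_lock */

/* Walks the list holding only bw_lock, the way icc_set_bw() aggregates. */
static void *aggregate(void *unused)
{
        (void)unused;
        pthread_mutex_lock(&bw_lock);
        for (struct req *r = req_list; r; r = r->next)
                printf("bw = %d\n", r->bw); /* can dereference a freed node */
        pthread_mutex_unlock(&bw_lock);
        return NULL;
}

/* Unlinks and frees holding only topology_lock, the way icc_put() did before
 * the patch. Uncommenting the bw_lock lines mirrors the fix.
 */
static void *release(void *unused)
{
        (void)unused;
        pthread_mutex_lock(&topology_lock);
        /* pthread_mutex_lock(&bw_lock); */
        struct req *r = req_list;
        req_list = r->next;
        free(r);
        /* pthread_mutex_unlock(&bw_lock); */
        pthread_mutex_unlock(&topology_lock);
        return NULL;
}

int main(void)
{
        struct req *r = malloc(sizeof(*r));

        r->bw = 100;
        r->next = NULL;
        req_list = r;

        pthread_t a, b;
        pthread_create(&a, NULL, aggregate, NULL);
        pthread_create(&b, NULL, release, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

Build with: gcc -pthread race_model.c. Because the window is timing dependent,
the use-after-free only shows up on an unlucky interleaving; running the model
under AddressSanitizer (-fsanitize=address) can make it visible when it hits.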



