Patch "nvme-multipath: find NUMA path only for online numa-node" has been added to the 6.9-stable tree

This is a note to let you know that I've just added the patch titled

    nvme-multipath: find NUMA path only for online numa-node

to the 6.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     nvme-multipath-find-numa-path-only-for-online-numa-n.patch
and it can be found in the queue-6.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit d4d5946c62aefef7f2c15b5a3dd84c21e0cc34fa
Author: Nilay Shroff <nilay@xxxxxxxxxxxxx>
Date:   Thu May 16 17:43:51 2024 +0530

    nvme-multipath: find NUMA path only for online numa-node
    
    [ Upstream commit d3a043733f25d743f3aa617c7f82dbcb5ee2211a ]
    
    In the current native multipath design, when a shared namespace is
    created we loop through each possible numa-node, calculate the NUMA
    distance of that node from each nvme controller, and cache the
    optimal IO path for later use when sending IO. The issue with this
    design is that we may consult the NUMA distance table for an offline
    node, for which it may not yet be populated, and so inadvertently
    find and cache a non-optimal IO path. Later, when that numa-node
    comes online and its NUMA distance table entry is created, we should
    ideally re-calculate the multipath node distance for the newly added
    node; however, that doesn't happen unless we rescan/reset the
    controller. So essentially, we may keep using a non-optimal IO path
    for a node which comes online after the namespace is created.
    
    This patch fixes the issue by ensuring that when a shared namespace
    is created, we calculate the multipath node distance for each online
    numa-node instead of each possible numa-node. Later, when a node
    comes online and we receive IO on it, we calculate the multipath
    node distance for that node; by then its NUMA distance table entry
    has been populated, so we correctly calculate the node distance and
    choose the optimal IO path.
    
    Signed-off-by: Nilay Shroff <nilay@xxxxxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
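
For context, here is a greatly simplified sketch of the path-caching
scheme the commit message describes. It omits the controller-state, ANA
and iopolicy checks that the real __nvme_find_path() performs; the field
and helper names follow the upstream code, but the body is illustrative
only:

static struct nvme_ns *sketch_find_path(struct nvme_ns_head *head, int node)
{
	int found_distance = INT_MAX;
	struct nvme_ns *found = NULL, *ns;

	/* Walk every sibling path of the shared namespace head. */
	list_for_each_entry_rcu(ns, &head->list, siblings) {
		/*
		 * node_distance() reads the NUMA distance table. For an
		 * offline node that row may not be populated yet, which
		 * is how a non-optimal path could be picked and cached.
		 */
		int distance = node_distance(node, ns->ctrl->numa_node);

		if (distance < found_distance) {
			found_distance = distance;
			found = ns;
		}
	}

	/* Cache the winner for future IO submitted from @node. */
	if (found)
		rcu_assign_pointer(head->current_path[node], found);
	return found;
}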

diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index a4e46eb20be63..1bee176fd850e 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -596,7 +596,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
 		int node, srcu_idx;
 
 		srcu_idx = srcu_read_lock(&head->srcu);
-		for_each_node(node)
+		for_each_online_node(node)
 			__nvme_find_path(head, node);
 		srcu_read_unlock(&head->srcu, srcu_idx);
 	}
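
Worth noting why restricting the setup-time scan to online nodes is
safe: the IO-time lookup recomputes the path whenever nothing has been
cached for the submitting node. A simplified sketch of that fallback
(assumed shape, not the verbatim kernel source; the caller is expected
to hold the head->srcu read lock):

static struct nvme_ns *sketch_lookup_path(struct nvme_ns_head *head)
{
	int node = numa_node_id();
	struct nvme_ns *ns;

	ns = srcu_dereference(head->current_path[node], &head->srcu);
	if (unlikely(!ns)) {
		/*
		 * No cached path for this node -- e.g. it came online
		 * after nvme_mpath_set_live() ran. By now the node's
		 * NUMA distance table row is populated, so the
		 * recalculated path is the optimal one.
		 */
		ns = __nvme_find_path(head, node);
	}
	return ns;
}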



