Patch "nvmet: fix a possible leak when destroy a ctrl during qp establishment" has been added to the 6.9-stable tree

This is a note to let you know that I've just added the patch titled

    nvmet: fix a possible leak when destroy a ctrl during qp establishment

to the 6.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     nvmet-fix-a-possible-leak-when-destroy-a-ctrl-during.patch
and it can be found in the queue-6.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 64549b91ee8f8294ae061ea9240fab14a205c37c
Author: Sagi Grimberg <sagi@xxxxxxxxxxx>
Date:   Mon May 27 22:38:52 2024 +0300

    nvmet: fix a possible leak when destroy a ctrl during qp establishment
    
    [ Upstream commit c758b77d4a0a0ed3a1292b3fd7a2aeccd1a169a4 ]
    
    In nvmet_sq_destroy we capture sq->ctrl early, and if it is non-NULL we
    know that a ctrl was allocated (in the admin connect request handler),
    so we need to release pending AERs, clear ctrl->sqs and sq->ctrl
    (primarily for nvme-loop), and drop the final reference on the ctrl.
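
    For orientation, a simplified sketch of the pre-fix ordering described
    above (illustrative only -- not the verbatim driver code; field and
    helper names follow the upstream nvmet target code as I read it):

        void nvmet_sq_destroy(struct nvmet_sq *sq)
        {
                struct nvmet_ctrl *ctrl = sq->ctrl;     /* captured early */

                /* kill sq->ref and wait for inflight requests to drain */
                percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
                wait_for_completion(&sq->confirm_done);
                wait_for_completion(&sq->free_done);
                percpu_ref_exit(&sq->ref);
                nvmet_auth_sq_free(sq);

                if (ctrl) {     /* stale NULL if admin connect raced in after the capture */
                        /* fail pending AERs, clear ctrl->sqs and sq->ctrl,
                         * and drop the final reference on the ctrl
                         */
                }
        }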
    
    However, a small window exists where nvmet_sq_destroy starts (as a
    result of the client giving up and disconnecting) concurrently with the
    nvme admin connect cmd (which may be in an early stage), but *before*
    kill_and_confirm of sq->ref (i.e. the admin connect managed to get a
    live sq reference). In this case sq->ctrl is allocated, but only after
    it was captured into the local variable in nvmet_sq_destroy, so the
    final reference on the ctrl is never dropped.
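
    A rough interleaving of the two contexts that hits this window (an
    illustrative timeline based on the description above, not code from the
    tree):

        /*
         * teardown context                     admin connect context
         * ----------------                     ---------------------
         * nvmet_sq_destroy()
         *   ctrl = sq->ctrl;    -> NULL
         *                                      takes a live sq->ref reference
         *   kill_and_confirm of sq->ref
         *   wait for inflight requests ...
         *                                      allocates ctrl, sets sq->ctrl
         *                                      completes, drops its sq->ref
         *   ... wait finishes
         *   if (ctrl)           -> still NULL: the final reference on the
         *                          newly allocated ctrl is never dropped
         */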
    
    Solve this by re-capturing sq->ctrl after all inflight requests have
    completed, at which point the sq->ctrl reference is guaranteed to be
    final, and move forward based on that.
    
    This issue was observed in an environment with many hosts connecting
    multiple ctrls simultaneously, which delays ctrl allocation and opens
    up this race window.
    
    Reported-by: Alex Turin <alex@xxxxxxxxxxxx>
    Signed-off-by: Sagi Grimberg <sagi@xxxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 2fde22323622e..06f0c587f3437 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -818,6 +818,15 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
 	percpu_ref_exit(&sq->ref);
 	nvmet_auth_sq_free(sq);
 
+	/*
+	 * we must reference the ctrl again after waiting for inflight IO
+	 * to complete. Because admin connect may have sneaked in after we
+	 * store sq->ctrl locally, but before we killed the percpu_ref. the
+	 * admin connect allocates and assigns sq->ctrl, which now needs a
+	 * final ref put, as this ctrl is going away.
+	 */
+	ctrl = sq->ctrl;
+
 	if (ctrl) {
 		/*
 		 * The teardown flow may take some time, and the host may not



