Patch "nvme-rdma: unquiesce admin_q before destroy it" has been added to the 6.12-stable tree

This is a note to let you know that I've just added the patch titled

    nvme-rdma: unquiesce admin_q before destroy it

to the 6.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     nvme-rdma-unquiesce-admin_q-before-destroy-it.patch
and it can be found in the queue-6.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit ee62c5d4cda118454eb7677944229c6b76cd766f
Author: Chunguang.xu <chunguang.xu@xxxxxxxxxx>
Date:   Tue Dec 3 11:34:41 2024 +0800

    nvme-rdma: unquiesce admin_q before destroy it
    
    [ Upstream commit 5858b687559809f05393af745cbadf06dee61295 ]
    
    Kernel will hang on destroy admin_q while we create ctrl failed, such
    as following calltrace:
    
    PID: 23644    TASK: ff2d52b40f439fc0  CPU: 2    COMMAND: "nvme"
     #0 [ff61d23de260fb78] __schedule at ffffffff8323bc15
     #1 [ff61d23de260fc08] schedule at ffffffff8323c014
     #2 [ff61d23de260fc28] blk_mq_freeze_queue_wait at ffffffff82a3dba1
     #3 [ff61d23de260fc78] blk_freeze_queue at ffffffff82a4113a
     #4 [ff61d23de260fc90] blk_cleanup_queue at ffffffff82a33006
     #5 [ff61d23de260fcb0] nvme_rdma_destroy_admin_queue at ffffffffc12686ce
     #6 [ff61d23de260fcc8] nvme_rdma_setup_ctrl at ffffffffc1268ced
     #7 [ff61d23de260fd28] nvme_rdma_create_ctrl at ffffffffc126919b
     #8 [ff61d23de260fd68] nvmf_dev_write at ffffffffc024f362
     #9 [ff61d23de260fe38] vfs_write at ffffffff827d5f25
        RIP: 00007fda7891d574  RSP: 00007ffe2ef06958  RFLAGS: 00000202
        RAX: ffffffffffffffda  RBX: 000055e8122a4d90  RCX: 00007fda7891d574
        RDX: 000000000000012b  RSI: 000055e8122a4d90  RDI: 0000000000000004
        RBP: 00007ffe2ef079c0   R8: 000000000000012b   R9: 000055e8122a4d90
        R10: 0000000000000000  R11: 0000000000000202  R12: 0000000000000004
        R13: 000055e8122923c0  R14: 000000000000012b  R15: 00007fda78a54500
        ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
    
    This is because we quiesced the admin_q before canceling requests, but
    forgot to unquiesce it before destroying it. As a result we fail to
    drain the pending requests and hang in blk_mq_freeze_queue_wait()
    forever. Reuse nvme_rdma_teardown_admin_queue() here to fix this issue
    and simplify the code.
    
    Fixes: 958dc1d32c80 ("nvme-rdma: add clean action for failed reconnection")
    Reported-by: Yingfu.zhou <yingfu.zhou@xxxxxxxxxx>
    Signed-off-by: Chunguang.xu <chunguang.xu@xxxxxxxxxx>
    Signed-off-by: Yue.zhao <yue.zhao@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Hannes Reinecke <hare@xxxxxxx>
    Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 24a2759798d01..913e6e5a80705 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1091,13 +1091,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 	}
 destroy_admin:
 	nvme_stop_keep_alive(&ctrl->ctrl);
-	nvme_quiesce_admin_queue(&ctrl->ctrl);
-	blk_sync_queue(ctrl->ctrl.admin_q);
-	nvme_rdma_stop_queue(&ctrl->queues[0]);
-	nvme_cancel_admin_tagset(&ctrl->ctrl);
-	if (new)
-		nvme_remove_admin_tag_set(&ctrl->ctrl);
-	nvme_rdma_destroy_admin_queue(ctrl);
+	nvme_rdma_teardown_admin_queue(ctrl, new);
 	return ret;
 }
 



