This is a note to let you know that I've just added the patch titled

    nvmet: fix nvme status code when namespace is disabled

to the 6.8-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    nvmet-fix-nvme-status-code-when-namespace-is-disable.patch
and it can be found in the queue-6.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 60b7e0adb197fe47de884b14095203c759e5fbb2
Author: Sagi Grimberg <sagi@xxxxxxxxxxx>
Date:   Sun Apr 28 12:25:40 2024 +0300

    nvmet: fix nvme status code when namespace is disabled

    [ Upstream commit 505363957fad35f7aed9a2b0d8dad73451a80fb5 ]

    If the user disabled a nvmet namespace, it is removed from the subsystem
    namespaces list. When nvmet processes a command directed to an nsid that
    was disabled, it cannot differentiate between a nsid that is disabled
    vs. a non-existent namespace, and resorts to return NVME_SC_INVALID_NS
    with the dnr bit set.

    This translates to a non-retryable status for the host, which translates
    to a user error. We should expect disabled namespaces to not cause an
    I/O error in a multipath environment.

    Address this by searching a configfs item for the namespace nvmet failed
    to find, and if we found one, conclude that the namespace is disabled
    (perhaps temporarily). Return NVME_SC_INTERNAL_PATH_ERROR in this case
    and keep DNR bit cleared.

    Reported-by: Jirong Feng <jirong.feng@xxxxxxxxxxxx>
    Tested-by: Jirong Feng <jirong.feng@xxxxxxxxxxxx>
    Signed-off-by: Sagi Grimberg <sagi@xxxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 2482a0db25043..b7bfee4b77a84 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -728,6 +728,19 @@ static struct configfs_attribute *nvmet_ns_attrs[] = {
 	NULL,
 };
 
+bool nvmet_subsys_nsid_exists(struct nvmet_subsys *subsys, u32 nsid)
+{
+	struct config_item *ns_item;
+	char name[4] = {};
+
+	if (sprintf(name, "%u", nsid) <= 0)
+		return false;
+	mutex_lock(&subsys->namespaces_group.cg_subsys->su_mutex);
+	ns_item = config_group_find_item(&subsys->namespaces_group, name);
+	mutex_unlock(&subsys->namespaces_group.cg_subsys->su_mutex);
+	return ns_item != NULL;
+}
+
 static void nvmet_ns_release(struct config_item *item)
 {
 	struct nvmet_ns *ns = to_nvmet_ns(item);
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 8658e9c08534d..7a6b3d37cca70 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -425,10 +425,13 @@ void nvmet_stop_keep_alive_timer(struct nvmet_ctrl *ctrl)
 u16 nvmet_req_find_ns(struct nvmet_req *req)
 {
 	u32 nsid = le32_to_cpu(req->cmd->common.nsid);
+	struct nvmet_subsys *subsys = nvmet_req_subsys(req);
 
-	req->ns = xa_load(&nvmet_req_subsys(req)->namespaces, nsid);
+	req->ns = xa_load(&subsys->namespaces, nsid);
 	if (unlikely(!req->ns)) {
 		req->error_loc = offsetof(struct nvme_common_command, nsid);
+		if (nvmet_subsys_nsid_exists(subsys, nsid))
+			return NVME_SC_INTERNAL_PATH_ERROR;
 		return NVME_SC_INVALID_NS | NVME_SC_DNR;
 	}
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 6c8acebe1a1a6..477416abf85ab 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -542,6 +542,7 @@ void nvmet_subsys_disc_changed(struct nvmet_subsys *subsys,
 			       struct nvmet_host *host);
 void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
 		u8 event_info, u8 log_page);
+bool nvmet_subsys_nsid_exists(struct nvmet_subsys *subsys, u32 nsid);
 
 #define NVMET_QUEUE_SIZE	1024
 #define NVMET_NR_QUEUES		128
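For context on why the status code choice matters, below is a minimal,
self-contained C sketch (not part of the patch) of the kind of retry/failover
decision a multipath host makes when it sees these completions. The constants
mirror the values used in include/linux/nvme.h, but is_path_error() and
decide() here are simplified illustrations under that assumption, not the
actual host driver code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SC_INVALID_NS		0x00b	/* invalid namespace or format */
#define NVME_SC_INTERNAL_PATH_ERROR	0x300	/* internal path error */
#define NVME_SC_DNR			0x4000	/* Do Not Retry bit */

enum disposition { DISP_FAIL, DISP_RETRY, DISP_FAILOVER };

/* Status Code Type 0x3 is "path related status" in the NVMe spec. */
static bool is_path_error(uint16_t status)
{
	return (status & 0x700) == 0x300;
}

/* Simplified disposition for a nonzero (error) completion status. */
static enum disposition decide(uint16_t status, bool multipath)
{
	if (status & NVME_SC_DNR)		/* non-retryable: fail the I/O */
		return DISP_FAIL;
	if (multipath && is_path_error(status))	/* retry on another path */
		return DISP_FAILOVER;
	return DISP_RETRY;			/* retryable on the same path */
}

int main(void)
{
	/* Before the patch: a disabled nsid looked like a hard error. */
	printf("INVALID_NS|DNR      -> %s\n",
	       decide(NVME_SC_INVALID_NS | NVME_SC_DNR, true) == DISP_FAIL ?
	       "fail the I/O" : "retry");

	/* After the patch: a disabled nsid is a retryable path error. */
	printf("INTERNAL_PATH_ERROR -> %s\n",
	       decide(NVME_SC_INTERNAL_PATH_ERROR, true) == DISP_FAILOVER ?
	       "fail over to another path" : "fail the I/O");
	return 0;
}

With NVME_SC_INTERNAL_PATH_ERROR and DNR cleared, a multipath host can fail
the command over to another path or retry it until the namespace is enabled
again, instead of surfacing an error to the user, which is the behaviour the
commit message argues for.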