On 12/3/19 10:00 PM, Keith Busch wrote:
>> If the controller returns Command Interrupted too many times, and
>> nvme_req(req)->retries runs down, this results in a device resource
>> error returned to the block layer. But I think we'll have this
>> problem with any error.
>
> Why is the controller returning the same error so many times? Are we
> not waiting the requested delay time? If so, the controller told us
> retrying should be successful.

Yes, this is a problem on the controller... but I only did this to test
the pathological case. I think we can all agree that if the controller
is going to continually return Command Interrupted, the controller is
broken.

> It is possible we kick the requeue list early if one command error
> has a valid CRD, but a subsequent retryable command does not. Is that
> what's happening?

Yes, as Hannes said, in the current code NVME_SC_CMD_INTERRUPTED is not
handled in nvme_error_status(), so it is translated as:

	default:
		return BLK_STS_IOERR;

This works fine with a single controller, but when REQ_NVME_MPATH is
set the code goes down the nvme_failover_req() path, which doesn't
handle NVME_SC_CMD_INTERRUPTED either, and we end up with:

	default:
		/*
		 * Reset the controller for any non-ANA error as we don't know
		 * what caused the error.
		 */
		nvme_reset_ctrl(ns->ctrl);
		break;
	}

So, the first time a controller with REQ_NVME_MPATH enabled returns
NVME_SC_CMD_INTERRUPTED it gets a controller reset.

> I'm just concerned because if we just skip counting the retry, a broken
> device could have the driver retry the same command indefinitely, which
> often leaves a task in an uninterruptible sleep state forever.

No, I'm not recommending that we skip retries. My diff was not a part
of this patch. I agree that it's not safe to skip retry counting.

>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 9696404a6182..24dc9ed1a11b 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -230,6 +230,8 @@ static blk_status_t nvme_error_status(u16 status)
>>  		return BLK_STS_NEXUS;
>>  	case NVME_SC_HOST_PATH_ERROR:
>>  		return BLK_STS_TRANSPORT;
>> +	case NVME_SC_CMD_INTERRUPTED:
>> +		return BLK_STS_DEV_RESOURCE;
>
> Just for the sake of keeping this change isolated to nvme, perhaps use an
> existing blk_status_t value that already maps to not path error, like
> BLK_STS_TARGET.

I can make that change... but I think BLK_STS_DEV_RESOURCE might be,
semantically, a better choice:

	[BLK_STS_TARGET]	= { -EREMOTEIO,	"critical target" },
	[BLK_STS_DEV_RESOURCE]	= { -EBUSY,	"device resource" },

The one use case we have for NVME_SC_CMD_INTERRUPTED in the Linux
NVMe-oF target is a resource allocation failure (e.g. ENOMEM). I think
Hannes came across this once while he was prototyping the ANA code in
the Linux NVMe-oF target.

Another potential use case in the controller might be deadlock
avoidance. I was experimenting with NVME_SC_CMD_INTERRUPTED in my
controller as a QoS mechanism... but I don't think
NVME_SC_CMD_INTERRUPTED/CRD is well suited for that use case. That's
how I created the pathological error case in my test.

Either way, I don't think that running out of retries when
NVME_SC_CMD_INTERRUPTED is returned is a critical target error.
Moreover, it appears BLK_STS_TARGET is, everywhere, related to some
kind of LBA range error.

/John
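
P.S. To make the CRD/retry-delay part of the discussion concrete, here
is a rough, illustrative sketch of the spec-level mapping (NVMe 1.4)
from the CRD field in the completion status to a retry delay, using the
CRDT1..3 values from Identify Controller (which are in units of 100 ms).
The names below are mine, not the driver's, and this is not a patch:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * With the phase bit stripped, bits 12:11 of the status field
	 * carry CRD.  CRD == 0 means no delay was requested; CRD == n
	 * (1..3) means "wait CRDTn * 100 ms before retrying".
	 */
	static unsigned int crd_to_delay_ms(uint16_t status,
					    const uint16_t crdt[3])
	{
		unsigned int crd = (status >> 11) & 0x3;

		return crd ? crdt[crd - 1] * 100u : 0;
	}

	int main(void)
	{
		/* example CRDT1..3 values from Identify Controller */
		const uint16_t crdt[3] = { 1, 5, 50 };
		/* Command Interrupted (SCT 0h / SC 21h) with CRD = 2 */
		uint16_t status = 0x21 | (2 << 11);

		printf("retry after %u ms\n", crd_to_delay_ms(status, crdt));
		return 0;
	}

That delay is what Keith means by "the controller told us retrying
should be successful"; the pathological case above is a controller that
keeps asking for the same delay-and-retry forever.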