On Mon, May 25, 2020 at 10:01:18PM -0700, Dongli Zhang wrote:
> 
> 
> On 5/20/20 4:56 AM, Ming Lei wrote:
> > During waiting for in-flight IO completion in reset handler, timeout
> 
> Does this indicate the window since nvme_start_queues() in nvme_reset_work(),
> that is, just after the queues are unquiesced again?

Right, nvme_start_queues() starts to dispatch requests again, and
nvme_wait_freeze() waits for completion of all these in-flight IOs.

> If v2 is required in the future, how about mentioning the specific function so
> that it would be much easier to track the issue?

Not sure it is needed, since it is quite straightforward.

> > or controller failure still may happen, then the controller is deleted
> > and all inflight IOs are failed. This way is too violent.
> >
> > Improve the reset handling by replacing nvme_wait_freeze with query
> > & check controller. If all ns queues are frozen, the controller is reset
> > successfully, otherwise check and see if the controller has been disabled.
> > If yes, break from the current recovery and schedule a fresh new reset.
> >
> > This way avoids failing IO & removing the controller unnecessarily.
> >
> > Cc: Christoph Hellwig <hch@xxxxxx>
> > Cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
> > Cc: Keith Busch <kbusch@xxxxxxxxxx>
> > Cc: Max Gurtovoy <maxg@xxxxxxxxxxxx>
> > Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> > ---
> >  drivers/nvme/host/pci.c | 37 ++++++++++++++++++++++++++++++-------
> >  1 file changed, 30 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> > index ce0d1e79467a..b5aeed33a634 100644
> > --- a/drivers/nvme/host/pci.c
> > +++ b/drivers/nvme/host/pci.c
> > @@ -24,6 +24,7 @@
> >  #include <linux/io-64-nonatomic-lo-hi.h>
> >  #include <linux/sed-opal.h>
> >  #include <linux/pci-p2pdma.h>
> > +#include <linux/delay.h>
> >
> >  #include "trace.h"
> >  #include "nvme.h"
> > @@ -1235,9 +1236,6 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
> >  	 * shutdown, so we return BLK_EH_DONE.
> >  	 */
> >  	switch (dev->ctrl.state) {
> > -	case NVME_CTRL_CONNECTING:
> > -		nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
> > -		/* fall through */
> >  	case NVME_CTRL_DELETING:
> >  		dev_warn_ratelimited(dev->ctrl.device,
> >  			 "I/O %d QID %d timeout, disable controller\n",
> > @@ -2393,7 +2391,8 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
> >  		u32 csts = readl(dev->bar + NVME_REG_CSTS);
> >
> >  		if (dev->ctrl.state == NVME_CTRL_LIVE ||
> > -		    dev->ctrl.state == NVME_CTRL_RESETTING) {
> > +		    dev->ctrl.state == NVME_CTRL_RESETTING ||
> > +		    dev->ctrl.state == NVME_CTRL_CONNECTING) {
> >  			freeze = true;
> >  			nvme_start_freeze(&dev->ctrl);
> >  		}
> > @@ -2504,12 +2503,29 @@ static void nvme_remove_dead_ctrl(struct nvme_dev *dev)
> >  	nvme_put_ctrl(&dev->ctrl);
> >  }
> >
> > +static bool nvme_wait_freeze_and_check(struct nvme_dev *dev)
> > +{
> > +	bool frozen;
> > +
> > +	while (true) {
> > +		frozen = nvme_frozen(&dev->ctrl);
> > +		if (frozen)
> > +			break;

...
> And how about adding a comment that the code below is because of the nvme
> timeout handler, as explained in another email (if v2 would be sent), so that
> it is not required to query for "online_queues" with cscope :)
> 
> > +		if (!dev->online_queues)
> > +			break;
> > +		msleep(5);

Fine.

Thanks,
Ming