On Fri, May 11, 2018 at 5:05 AM, Keith Busch <keith.busch@xxxxxxxxxxxxxxx> wrote:
> On Fri, May 11, 2018 at 04:52:11AM +0800, Ming Lei wrote:
>> Hi Keith,
>>
>> On Tue, May 8, 2018 at 11:30 PM, Keith Busch <keith.busch@xxxxxxxxx> wrote:
>> > On Sat, Apr 28, 2018 at 11:50:17AM +0800, Ming Lei wrote:
>> >> This sync may race with a timed-out request, which may be handled
>> >> as BLK_EH_HANDLED or BLK_EH_RESET_TIMER, so the above queue sync
>> >> can't work reliably.
>> >
>> > Ming,
>> >
>> > As proposed, that scenario is impossible to encounter. Resetting the
>> > controller inline with the timeout reaps all the commands, and then
>> > sets the controller state to RESETTING. While blk-mq may not allow
>> > the driver to complete those requests, having the driver sync with
>> > the queues will hold the controller in the reset state until blk-mq
>> > is done with its timeout work; therefore, it is impossible for the
>> > NVMe driver to return "BLK_EH_RESET_TIMER", and all commands will be
>> > completed through nvme_timeout's BLK_EH_HANDLED exactly as desired.
>>
>> That isn't true for the multiple-namespace case: each request queue has
>> its own timeout work, and all of these timeout works can be triggered
>> concurrently.
>
> The controller state is most certainly not per queue/namespace. It's
> global to the controller. Once the reset is triggered, nvme_timeout can
> only return EH_HANDLED.

It is related to EH_HANDLED; please see the following case:

1) When req A from N1 times out, nvme_timeout() handles it as
EH_HANDLED: nvme_dev_disable() is called and a reset is scheduled.

2) When req B from N2 times out, nvme_timeout() handles it as
EH_HANDLED too, so nvme_dev_disable() is called exactly while the reset
is in progress; the queues become quiesced, and nothing can make
progress in the reset triggered by N1.

Thanks,
Ming Lei
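
P.S. Below is a rough user-space sketch of the interleaving in 1)/2)
above. It is only a toy model of the sequence, not the real driver
code; ctrl_enabled, queues_quiesced and resetting are made-up names
used purely to illustrate how the second nvme_dev_disable() lands in
the middle of the reset and stalls it.

/* Toy model of two concurrent per-queue timeout works; not kernel code. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int ctrl_enabled = 1;	/* "controller is up" */
static int queues_quiesced;	/* "blk-mq queues are quiesced" */
static int resetting;		/* "controller state == RESETTING" */

/* models what nvme_dev_disable() does: shut down and quiesce queues */
static void dev_disable(const char *who)
{
	pthread_mutex_lock(&lock);
	ctrl_enabled = 0;
	queues_quiesced = 1;
	printf("%s: controller disabled, queues quiesced\n", who);
	pthread_mutex_unlock(&lock);
}

/* models the reset work scheduled by N1's timeout */
static void *reset_work(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	ctrl_enabled = 1;		/* re-enable the controller */
	printf("reset: controller re-enabled, doing admin setup...\n");
	pthread_mutex_unlock(&lock);

	sleep(2);			/* window in which N2's timeout fires */

	pthread_mutex_lock(&lock);
	if (!ctrl_enabled || queues_quiesced)
		printf("reset: stalled, disabled again underneath us, "
		       "nothing can move on\n");
	else
		printf("reset: completed, queues restarted\n");
	resetting = 0;
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* models the timeout work of N2's request queue, step 2) above */
static void *timeout_N2(void *arg)
{
	(void)arg;
	sleep(1);			/* fires while the reset is running */
	pthread_mutex_lock(&lock);
	printf("N2 timeout: req B timed out, state is %s -> EH_HANDLED\n",
	       resetting ? "RESETTING" : "LIVE");
	pthread_mutex_unlock(&lock);
	dev_disable("N2 timeout");	/* quiesces queues mid-reset */
	return NULL;
}

int main(void)
{
	pthread_t reset, n2;

	/* step 1): N1's timeout work handles req A as EH_HANDLED */
	printf("N1 timeout: req A timed out -> EH_HANDLED\n");
	dev_disable("N1 timeout");
	resetting = 1;
	pthread_create(&reset, NULL, reset_work, NULL);
	pthread_create(&n2, NULL, timeout_N2, NULL);

	pthread_join(reset, NULL);
	pthread_join(n2, NULL);
	return 0;
}

Built with "gcc -pthread", the last line of output is the "reset:
stalled" message, which is the situation described in 2): the disable
issued by N2's timeout work hits while the reset triggered by N1 is
still in progress.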