Re: [PATCH 1/2] nvme: pci: simplify timeout handling

On Fri, May 11, 2018 at 5:18 AM, Keith Busch
<keith.busch@xxxxxxxxxxxxxxx> wrote:
> On Fri, May 11, 2018 at 05:10:40AM +0800, Ming Lei wrote:
>> On Fri, May 11, 2018 at 5:05 AM, Keith Busch
>> <keith.busch@xxxxxxxxxxxxxxx> wrote:
>> > On Fri, May 11, 2018 at 04:52:11AM +0800, Ming Lei wrote:
>> >> Hi Keith,
>> >>
>> >> On Tue, May 8, 2018 at 11:30 PM, Keith Busch <keith.busch@xxxxxxxxx> wrote:
>> >> > On Sat, Apr 28, 2018 at 11:50:17AM +0800, Ming Lei wrote:
>> >> >> This sync may race with a timed-out request, which may be handled
>> >> >> as BLK_EH_HANDLED or BLK_EH_RESET_TIMER, so the above syncing of the
>> >> >> queues can't work reliably.
>> >> >
>> >> > Ming,
>> >> >
>> >> > As proposed, that scenario is impossible to encounter. Resetting the
>> >> > controller inline with the timeout reaps all the commands, and then
>> >> > sets the controller state to RESETTING. While blk-mq may not allow the
>> >> > driver to complete those requests, having the driver sync with the queues
>> >> > will hold the controller in the reset state until blk-mq is done with
>> >> > its timeout work; therefore, it is impossible for the NVMe driver to
>> >> > return "BLK_EH_RESET_TIMER", and all commands will be completed through
>> >> > nvme_timeout's BLK_EH_HANDLED exactly as desired.
>> >>
>> >> That isn't true for the multiple-namespace case: each request queue has its
>> >> own timeout work, and all of these can be triggered concurrently.
>> >
>> > The controller state is most certainly not per queue/namespace. It's
>> > global to the controller. Once the reset is triggered, nvme_timeout can
>> > only return EH_HANDLED.
>>
>> It is related to EH_HANDLED; please see the following case:
>>
>> 1) when req A from N1 times out, nvme_timeout() handles it as
>> EH_HANDLED: nvme_dev_disable() is called and a reset is scheduled.
>>
>> 2) when req B from N2 times out, nvme_timeout() handles it as
>> EH_HANDLED too, so nvme_dev_disable() is called exactly while the
>> reset is in progress; the queues become quiesced, and nothing can
>> make progress in the reset triggered by N1.
>
> Huh? The nvme_sync_queues ensures that doesn't happen. That was the
> whole point.

Sorry, I forgot to mention: it isn't enough to simply sync the timeouts inside reset().
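
To make the scenario quoted above concrete, both timeout workers run
roughly the path below (a simplified sketch of the current timeout
handler, not verbatim driver code):

static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
	struct nvme_dev *dev = iod->nvmeq->dev;

	/*
	 * N2's timeout worker can land here while the reset triggered
	 * by N1 is already running: disabling the device again quiesces
	 * the queues that reset_work depends on, and no new reset is
	 * scheduled, so the reset from N1 can't make progress.
	 */
	if (dev->ctrl.state == NVME_CTRL_RESETTING) {
		nvme_dev_disable(dev, false);
		return BLK_EH_HANDLED;
	}

	/* N1's timeout worker takes the normal path: tear down and reset */
	nvme_dev_disable(dev, false);
	nvme_reset_ctrl(&dev->ctrl);
	return BLK_EH_HANDLED;
}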

Another tricky thing is freeze & unfreeze: currently freeze is done in
nvme_dev_disable(), and unfreeze is done in nvme_reset_work(). That means
we have to make sure the two are paired, otherwise queues may stay
frozen forever.
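
For reference, that pairing looks roughly like this today (a simplified
sketch, with unrelated steps omitted):

static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
{
	/* freeze side: bumps the freeze depth of every namespace queue */
	if (dev->ctrl.state == NVME_CTRL_LIVE ||
	    dev->ctrl.state == NVME_CTRL_RESETTING)
		nvme_start_freeze(&dev->ctrl);

	/* ... quiesce queues, reap outstanding requests, ... */
}

static void nvme_reset_work(struct work_struct *work)
{
	struct nvme_dev *dev =
		container_of(work, struct nvme_dev, ctrl.reset_work);

	/* ... re-enable the controller and set up the IO queues ... */

	/* unfreeze side: only reached if the reset actually completes */
	nvme_wait_freeze(&dev->ctrl);
	nvme_unfreeze(&dev->ctrl);
}

If nvme_dev_disable() is entered twice (for example from two namespaces'
timeout workers) but nvme_reset_work() unfreezes only once, the freeze
depth never drops back to zero and the queues stay frozen.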

This issue is covered by my V4 & V5 too.

thanks,
Ming Lei


