Re: [LSF/MM/BPF TOPIC] NVMe HDD


 



On Wed, Feb 19, 2020 at 01:53:53AM +0000, Damien Le Moal wrote:
> On 2020/02/19 10:32, Ming Lei wrote:
> > On Wed, Feb 19, 2020 at 02:41:14AM +0900, Keith Busch wrote:
> >> On Tue, Feb 18, 2020 at 10:54:54AM -0500, Tim Walker wrote:
> >>> With regards to our discussion on queue depths, it's common knowledge
> >>> that an HDD chooses commands from its internal command queue to
> >>> optimize performance. The HDD looks at things like the current
> >>> actuator position, current media rotational position, power
> >>> constraints, command age, etc to choose the best next command to
> >>> service. A large number of commands in the queue gives the HDD a
> >>> better selection of commands from which to choose to maximize
> >>> throughput/IOPS/etc but at the expense of the added latency due to
> >>> commands sitting in the queue.
> >>>
> >>> NVMe doesn't allow us to pull commands randomly from the SQ, so the
> >>> HDD should attempt to fill its internal queue from the various SQs,
> >>> according to the SQ servicing policy, so it can have a large number of
> >>> commands to choose from for its internal command processing
> >>> optimization.
> >>
> >> You don't need multiple queues for that. While the device has to
> >> FIFO-fetch commands from a host's submission queue, it may reorder their
> >> execution and completion however it wants, which you can do with a
> >> single queue.
> >>  
> >>> It seems to me that the host would want to limit the total number of
> >>> outstanding commands to an NVMe HDD
> >>
> >> The host shouldn't have to decide on limits. NVMe lets the device report
> >> its queue count and depth. It should be the device's responsibility to
> > 
> > Will NVMe HDD support multiple namespaces (NS)? If yes, this queue depth
> > isn't enough, given that all NSs share this single host queue depth.
> > 
> >> report appropriate values that maximize iops within your latency limits,
> >> and the host will react accordingly.
> > 
> > Suppose an NVMe HDD just wants to support a single NS and there is a
> > single queue: if the device only reports one host queue depth, block layer
> > IO sort/merge can only be done when device saturation feedback is provided.
> > 
> > So it looks like either a per-NS queue depth or a per-NS device saturation
> > feedback mechanism is needed; otherwise the NVMe HDD may have to do
> > internal IO sort/merge.
> 
> SAS and SATA HDDs today already do internal IO reordering and merging, a
> lot. That is partly why, even with "none" set as the scheduler, you can see
> IOPS increasing with the QD used.

That is why I asked whether an NVMe HDD would attempt to sort/merge IO among
SQs from the beginning, but Tim said no; see:

https://lore.kernel.org/linux-block/20200212215251.GA25314@ming.t460p/T/#m2d0eff5ef8fcaced0f304180e571bb8fefc72e84

It could be cheap for an NVMe HDD to do that, given that all queues/requests
just stay in the host's RAM.

Also, I guess the drive's internal IO sort/merge may not be as good as the
software implementation in the block layer, for two reasons:

1) The device's internal queue depth is often low, so not many requests
participate in the reordering, whereas the software scheduler's queue depth
is typically twice the device queue depth (see the sketch after this list).

2) The HDD doesn't have per-context information, so when concurrent IOs are
issued from multiple contexts, the drive's internal reorder/merge can't work
well enough. blk-mq doesn't address this case either, although the legacy IO
path did consider it via IO-context (ioc) batching (also sketched below).
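
To illustrate 1), here is a rough standalone sketch of the sizing
relationship, paraphrasing what blk_mq_init_sched() does when an elevator is
attached; the names and the 128 cap are placeholders of mine, not the actual
kernel code:

/*
 * Illustrative sketch only (not kernel source): blk-mq sizes the
 * elevator's request pool at roughly twice the hardware queue depth,
 * capped, so the scheduler still has spare requests to sort/merge
 * while the device queue is already full.
 */
#include <stdio.h>

#define SKETCH_MAX_RQ	128	/* stands in for BLKDEV_MAX_RQ */

static unsigned int sched_queue_depth(unsigned int hw_queue_depth)
{
	unsigned int capped = hw_queue_depth < SKETCH_MAX_RQ ?
			      hw_queue_depth : SKETCH_MAX_RQ;

	return 2 * capped;
}

int main(void)
{
	/* e.g. an NVMe HDD reporting a single SQ of depth 32 */
	printf("device QD 32 -> scheduler depth %u\n", sched_queue_depth(32));
	return 0;
}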
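
And for 2), a rough standalone sketch of the idea behind the legacy path's
ioc batching (ioc_set_batching()/ioc_batching() in the old blk-core.c, gone
with the legacy request path); the structure, batch size and window length
below are my own placeholders, not the kernel's values:

/*
 * Illustrative sketch only: once a task had to sleep for a request, the
 * legacy path marked its io_context as "batching" for a short window,
 * letting that context keep allocating requests back to back so its IOs
 * stay adjacent and stand a better chance of being merged.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct sketch_io_context {
	int      nr_batch_requests;	/* allocations left in this batch */
	uint64_t last_waited_ms;	/* when the task last slept */
};

#define SKETCH_BATCH_REQ	32	/* assumed batch size */
#define SKETCH_BATCH_TIME_MS	20	/* assumed batching window */

static void sketch_set_batching(struct sketch_io_context *ioc, uint64_t now_ms)
{
	ioc->nr_batch_requests = SKETCH_BATCH_REQ;
	ioc->last_waited_ms = now_ms;
}

static bool sketch_is_batching(const struct sketch_io_context *ioc,
			       uint64_t now_ms)
{
	return ioc->nr_batch_requests > 0 &&
	       now_ms < ioc->last_waited_ms + SKETCH_BATCH_TIME_MS;
}

int main(void)
{
	struct sketch_io_context ioc = { 0, 0 };

	sketch_set_batching(&ioc, 1000);	/* task slept at t=1000ms */
	printf("batching at t=1010ms: %d\n", sketch_is_batching(&ioc, 1010));
	printf("batching at t=1030ms: %d\n", sketch_is_batching(&ioc, 1030));
	return 0;
}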


Thanks, 
Ming



