Re: [PATCH 4/4] scsi: core: don't limit per-LUN queue depth for SSD

> If we ignore the RAID controller use case where the controller
> internally queues and arbitrates commands between many devices

These controllers should not be "ignored", but rather enabled. Many of them virtualize both HDD and NVMe devices behind them and are thus forced to expose themselves as SCSI controllers.
However, they have their own queue management and I/O merging capabilities. Many can hold I/O in their queues and pull commands as needed (just like NVMe), so they are not bothered by large numbers of I/Os being sent to a device or controller, or by congestion. In case of congestion, the I/O simply waits in the controller's queue, and the controller provides advanced timeout handling on top of that.
Besides, as Ming pointed out, the block layer (see hctx_may_queue()) already limits I/O on a per-controller and per-LUN basis.
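
For reference, here is a simplified, self-contained sketch of the kind of fair-share check hctx_may_queue() applies when a tag set is shared among several queues. It is paraphrased rather than the kernel source verbatim, and the helper name and parameters are purely illustrative:

#include <stdbool.h>

/*
 * Give each queue (LUN) that is actively issuing I/O roughly an equal
 * slice of the shared tag space, with a small floor so nobody starves.
 */
static bool may_queue(unsigned int tagset_depth,   /* tags available on the HBA */
                      unsigned int active_queues,  /* LUNs currently issuing I/O */
                      unsigned int my_inflight)    /* requests this LUN has in flight */
{
        unsigned int share;

        if (active_queues <= 1)
                return true;

        share = tagset_depth / active_queues;
        if (share < 4)
                share = 4;

        return my_inflight < share;
}

So even without the midlayer's per-LUN queue depth, a single LUN cannot monopolize the controller's shared tag space.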

Overall, if the proposal does not work for all cases, then it should at least be made optional for high-end controllers, so that they are not disadvantaged vis-a-vis NVMe simply because they expose themselves as SCSI in order to support a wide range of devices behind them.

thanks,
Sumanesh

On 11/21/2019 7:59 PM, Martin K. Petersen wrote:
Ming,

> I don't understand the motivation of ramp-up/ramp-down, maybe it is just
> for fairness among LUNs.

Congestion control. Devices have actual, physical limitations that are
different from the tag context limitations on the HBA. You don't have
that problem on NVMe because (at least for PCIe) the storage device and
the controller are one and the same.

If you submit 100000 concurrent requests to a SCSI drive that does 100
IOPS, some requests will time out before they get serviced.
Consequently we have the ability to raise and lower the queue depth to
constrain the amount of requests in flight to a given device at any
point in time.
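
(For scale: at 100 IOPS the last of those 100000 queued requests would complete only after roughly 100000 / 100 = 1000 seconds, far beyond the default 30-second SCSI command timeout, so the number of requests in flight has to be bounded per device.)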

Also, devices use BUSY/QUEUE_FULL/TASK_SET_FULL to cause the OS to back
off. We frequently see issues where the host can submit burst I/O much
faster than the device can de-stage from cache. In that scenario the
device reports BUSY/QF/TSF and we will back off so the device gets a
chance to recover. If we just let the application submit new I/O without
bounds, the system would never actually recover.
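
As an illustration, here is a hedged sketch of how a low-level driver can hand that back-off to the midlayer: scsi_track_queue_full() records roughly how many commands were outstanding when the device reported QUEUE FULL / TASK SET FULL and lowers the LUN's queue depth if the condition keeps recurring. device_reported_task_set_full() below is a hypothetical stand-in for the driver's own status decoding, not a real API:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

/* Hypothetical helper; a real driver decodes cmd->result itself. */
static bool device_reported_task_set_full(struct scsi_cmnd *cmd);

static void my_handle_completion(struct scsi_cmnd *cmd)
{
        struct scsi_device *sdev = cmd->device;

        if (device_reported_task_set_full(cmd))
                /*
                 * Report roughly how deep the queue was when the device
                 * said "full"; the midlayer tracks successive events and
                 * lowers sdev->queue_depth accordingly.
                 */
                scsi_track_queue_full(sdev, sdev->queue_depth - 1);
}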

Note that the actual, physical limitations for how many commands a
target can handle are typically much, much lower than the number of tags
the HBA can manage. SATA devices can only express 32 concurrent
commands. SAS devices typically 128 concurrent commands per
port. Arrays differ.

If we ignore the RAID controller use case where the controller
internally queues and arbitrates commands between many devices, how is
submitting 1000 concurrent requests to a device which only has 128
command slots going to work?

Some HBAs have special sauce to manage BUSY/QF/TSF, some don't. If we
blindly stop restricting the number of I/Os in flight in the ML, we may
exceed either the capabilities of what the transport protocol can
express or internal device resources.



