Re: About scsi device queue depth

On 2021-01-11 11:40 a.m., James Bottomley wrote:
On Mon, 2021-01-11 at 16:21 +0000, John Garry wrote:
Hi,

I was looking at some IOMMU issue on a LSI RAID 3008 card, and
noticed that performance there is not what I get on other SAS HBAs -
it's lower.

After some debugging and fiddling with sdev queue depth in mpt3sas
driver, I am finding that performance changes appreciably with sdev
queue depth:

sdev qdepth \ fio numjobs        1       10       20
16                            1590     1654     1660
32                            1545     1646     1654
64                            1436     1085     1070
254 (default)                 1436     1070     1050

fio queue depth is 40, and I'm using 12x SAS SSDs.

I got a comparable disparity in results for fio queue depth = 128 and
numjobs = 1:

sdev qdepth \ fio numjobs        1
16                            1640
32                            1618
64                            1577
254 (default)                 1437

IO sched = none.

That driver also sets queue depth tracking = 1, but it never seems to
kick in.

So it seems to me that the block layer is merging more bios per
request, as the average sg count per request goes up from 1 to as
many as 6 or more. As I see it, when the queue depth is lowered, the
only thing that really changes is that we fail more often to get the
budget in scsi_mq_get_budget() -> scsi_dev_queue_ready().
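
For reference, that check boils down to roughly the following; this
is only a simplified sketch of the logic, not the actual scsi_lib.c
code, whose details (ramp-up handling, the exact form of the busy
counter) differ between kernel versions:

#include <linux/atomic.h>
#include <scsi/scsi_device.h>

/* Simplified sketch of the per-sdev budget check. */
static bool sdev_budget_available(struct scsi_device *sdev)
{
	/* One budget unit per command in flight on this sdev. */
	if (atomic_inc_return(&sdev->device_busy) <= sdev->queue_depth)
		return true;

	/*
	 * Over the sdev queue depth: give the budget back and fail.
	 * The request then stays on the block layer side, where later
	 * bios can still be merged into it.
	 */
	atomic_dec(&sdev->device_busy);
	return false;
}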

So the initial sdev queue depth comes from cmd_per_lun by default, or
from the driver setting it manually via scsi_change_queue_depth(). It
seems to me that some drivers are not setting this optimally, as
above.
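
For reference, those two knobs in a driver look roughly like this
(made-up "example" LLD fragment; the template fields and
scsi_change_queue_depth() are the real midlayer interfaces, the
numbers are just illustrative):

#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

static int example_slave_configure(struct scsi_device *sdev)
{
	/*
	 * Override the cmd_per_lun default with a per-device value;
	 * 32 is just a number taken from the table above.
	 */
	scsi_change_queue_depth(sdev, 32);
	return 0;
}

static struct scsi_host_template example_sht = {
	.name			= "example",
	.slave_configure	= example_slave_configure,
	.change_queue_depth	= scsi_change_queue_depth,
	.cmd_per_lun		= 16,	/* initial sdev queue depth */
	.track_queue_depth	= 1,	/* enable QUEUE FULL tracking */
};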

Any thoughts on guidance for setting the sdev queue depth? Could
blk-mq have changed this behavior?

In general, for spinning rust, you want the minimum queue depth

"Spinning rust" is starting to wear a bit thin. In power electronics
(almost) pure silicon is on the way out (i.e. becoming 'legacy').
It is being replaced by Silicon Carbide and Gallium Nitride.
What goes around, comes around :-)

Doug Gilbert

possible for keeping the device active, because merging is a very
important performance enhancement, and once the drive is fully
occupied simply sending more tags won't improve latency.  We used to
recommend a depth of about 4 for this reason.  A co-operative device
can help you find the optimum by returning QUEUE FULL when it's fully
occupied, so we have a mechanism to track the QUEUE FULL returns and
adjust the depth on the fly.
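
The driver-visible piece of that is scsi_track_queue_full() plus
.track_queue_depth in the host template; very roughly, something like
this in a completion path (illustrative fragment, not any particular
driver):

#include <linux/types.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_proto.h>

/*
 * When the device reports TASK SET FULL (the "QUEUE FULL" case), let
 * the midlayer's tracking decide whether enough of them have arrived
 * to lower the sdev queue depth.  With .track_queue_depth = 1 the
 * error-handling path performs a similar adjustment on its own.
 */
static void example_check_queue_full(struct scsi_cmnd *scmd, u8 status)
{
	if (status == SAM_STAT_TASK_SET_FULL)
		scsi_track_queue_full(scmd->device,
				      scmd->device->queue_depth - 1);
}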

For high-IOPS devices, these considerations went out of the window
and it's generally assumed (with varying degrees of evidence) that
more tags are better.  SSDs have a peculiar lifetime problem: when
they get erase-block starved they start behaving more like spinning
rust, in that they reach a processing limit, but only for writes, so
lowering the write queue depth (which we don't even have a knob for)
might be a good solution.  Trying to track the erase-block problem
has been a constant bugbear.
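
As a pure thought experiment (no such knob exists today), a write-only
depth would amount to a separate in-flight counter consulted only for
writes, something like:

#include <linux/atomic.h>

/*
 * Purely hypothetical: account write commands separately and refuse
 * budget once a (made-up) write_queue_depth is reached, while reads
 * stay limited only by the normal sdev queue depth.
 */
struct example_write_limit {
	atomic_t	writes_in_flight;
	int		write_queue_depth;
};

static bool example_write_budget_get(struct example_write_limit *wl,
				     bool is_write)
{
	if (!is_write)
		return true;
	if (atomic_inc_return(&wl->writes_in_flight) <=
	    wl->write_queue_depth)
		return true;
	atomic_dec(&wl->writes_in_flight);
	return false;
}

static void example_write_budget_put(struct example_write_limit *wl,
				     bool is_write)
{
	if (is_write)
		atomic_dec(&wl->writes_in_flight);
}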

I'm assuming you're using spinning rust in the above, so it sounds
like the firmware in the card might be eating the QUEUE FULL returns.
I could see this happening in RAID mode, but it shouldn't happen in
JBOD mode.

James





