About scsi device queue depth

Hi,

I was looking at an IOMMU issue on an LSI RAID 3008 card, and noticed that performance there is lower than what I get on other SAS HBAs.

After some debugging and fiddling with the sdev queue depth in the mpt3sas driver, I found that performance changes appreciably with the sdev queue depth:

sdev qdepth      numjobs=1    numjobs=10    numjobs=20
16               1590         1654          1660
32               1545         1646          1654
64               1436         1085          1070
254 (default)    1436         1070          1050

fio queue depth is 40, and I'm using 12x SAS SSDs.

I got a comparable disparity in results for fio queue depth = 128 and numjobs = 1:

sdev qdepth      numjobs=1
16               1640
32               1618
64               1577
254 (default)    1437

IO sched = none.

That driver also sets queue depth tracking (track_queue_depth = 1), but it never seems to kick in.
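
To show the flag I mean, here is a minimal, hypothetical host template fragment (made-up values, not the real mpt3sas template); as I understand it, with track_queue_depth set the midlayer can ramp the sdev queue depth down on TASK SET FULL and back up again on later successful completions:

#include <scsi/scsi_host.h>

/*
 * Hypothetical template fragment, only to illustrate the fields under
 * discussion; the values are made up and this is not the mpt3sas template.
 */
static struct scsi_host_template example_sht = {
	.name			= "example_hba",
	.can_queue		= 1024,	/* host-wide command limit */
	.cmd_per_lun		= 7,	/* default per-sdev queue depth */
	.track_queue_depth	= 1,	/* let the midlayer adjust the depth */
};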

So it seems to me that the block layer is merging more bios per request, as the average sg count per request goes up from 1 to 6 or more. As far as I can see, when the queue depth is lowered the only thing that really changes is that we fail more often to get the budget in scsi_mq_get_budget() -> scsi_dev_queue_ready().
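
To spell out what I mean by failing to get the budget, here is a stripped-down paraphrase of that path (my own simplified sketch, not the literal kernel code), where the sdev queue depth is just the cap on in-flight commands per device:

#include <linux/atomic.h>
#include <scsi/scsi_device.h>

/*
 * Simplified paraphrase of the scsi_mq_get_budget() ->
 * scsi_dev_queue_ready() check, not the literal kernel source:
 * the sdev queue depth caps how many commands may be in flight
 * per device before the budget is refused.
 */
static bool example_dev_queue_ready(struct scsi_device *sdev)
{
	unsigned int busy;

	/* Tentatively account this request against the device. */
	busy = atomic_inc_return(&sdev->device_busy) - 1;

	/* Refuse the budget once we are at the sdev queue depth. */
	if (busy >= sdev->queue_depth) {
		atomic_dec(&sdev->device_busy);
		return false;
	}

	return true;
}

When that fails, the request sits in the blk-mq queues for longer, which is where the extra merging appears to come from.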

The initial sdev queue depth comes from cmd_per_lun by default, or from the driver setting it explicitly via scsi_change_queue_depth(). It seems to me that some drivers are not setting this optimally, as above.
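
As a concrete (hypothetical) example of the latter, a driver can pick the depth in its slave_configure callback; the depth of 32 below is just the sweet spot from my numbers above, not a general recommendation:

#include <scsi/scsi_device.h>

/*
 * Hypothetical slave_configure showing where a driver would set the sdev
 * queue depth explicitly instead of inheriting cmd_per_lun. The value 32
 * is only the best point from my measurements above.
 */
static int example_slave_configure(struct scsi_device *sdev)
{
	scsi_change_queue_depth(sdev, 32);
	return 0;
}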

Any thoughts on guidance for setting the sdev queue depth? Could blk-mq have changed this behavior?

Thanks,
John


