Re: About scsi device queue depth

On 12/01/2021 07:23, Hannes Reinecke wrote:

So it seems to me that the block layer is merging more bios per request, as the average sg count per request goes up from 1 to 6 or more. As far as I can see, when the queue depth is lowered the only thing that really changes is that we fail more often to get the budget in scsi_mq_get_budget() -> scsi_dev_queue_ready().

So the initial sdev queue depth comes from cmd_per_lun by default, or from the driver setting it manually via scsi_change_queue_depth(). It seems to me that some drivers are not setting this optimally, as above.

Thoughts on guidance for setting sdev queue depth? Could blk-mq have changed this behavior?
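For reference, the driver-side override mentioned above is normally done from the slave_configure() hook. A minimal sketch (hypothetical driver; the depth value of 32 is an assumption, not a recommendation from this thread):

```c
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/*
 * Sketch only: hypothetical driver hook. The midlayer seeds the sdev
 * queue depth from the host template's cmd_per_lun; a driver that knows
 * better can override it here via scsi_change_queue_depth().
 */
static int my_slave_configure(struct scsi_device *sdev)
{
	/* 32 is an assumed value for illustration */
	scsi_change_queue_depth(sdev, 32);
	return 0;
}
```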

First of all: are these 'real' SAS SSDs?
The peak at 32 seems very ATA-ish, and I wouldn't put it past the LSI folks to optimize for that case :-) Can you get a more detailed picture by changing the queue depth in a more fine-grained way?
(Will get you nicer graphs to boot :-)

They're HUSMM1640ASS204 - not the fastest you can get today, but still decent.

I'll see about fine-grained IOPS vs depth results ...
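In case it's useful, a sweep like that can be scripted against the sysfs queue_depth attribute. A sketch that just prints the per-depth steps for review rather than running them (the device name, depth list, and fio parameters are all assumptions):

```shell
#!/bin/sh
# Print (not run) one measurement step per queue depth, so the plan
# can be checked before touching the device. All values are assumptions.
gen_sweep() {
	dev="$1"
	for qd in 1 2 4 8 12 16 24 32 48 64; do
		printf 'echo %s > /sys/block/%s/device/queue_depth\n' "$qd" "$dev"
		printf 'fio --name=qd%s --filename=/dev/%s --direct=1 --rw=randread --bs=4k --runtime=30 --time_based\n' "$qd" "$dev"
	done
}

gen_sweep sdb	# assumption: sdb is the SSD under test
```

Piping the real fio output through a one-line parser per depth would then give the IOPS-vs-depth curve directly.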

Cheers,
John


