Re: About scsi device queue depth

On 12/01/2021 11:44, Kashyap Desai wrote:
loops = 10000
Is there any effect on random read IOPS when you decrease sdev queue
depth? For sequential IO, IO merge can be enhanced by that way.

Let me check...
John - Can you check your test with rq_affinity=2 and nomerges=2?

I have noticed similar drops to what you have reported - once we get close to
the peak of the sdev or host queue depth, performance sometimes drops due to
contention.
But this behavior keeps changing, since kernel changes in this area have been
very active in the past, so I don't know the exact details about kernel
versions etc.
I have similar setup (16 SSDs) and I will try similar test on latest kernel.

BTW - I remember that rq_affinity=2 plays a major role in such issues. I
usually do testing with rq_affinity=2.


Hi Kashyap,

As requested:

rq_affinity=1, nomerges=0 (default)

sdev queue depth	num jobs=1
8			1650
16			1638
32			1612
64			1573
254			1435 (default for LLDD)

rq_affinity=2, nomerges=2

sdev queue depth	num jobs=1
8			1236
16			1423
32			1438
64			1438
254			1438 (default for LLDD)

Setup as original: fio read, 12x SAS SSDs
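
For reference, the queue settings were applied per disk via sysfs, roughly
like below (a minimal sketch rather than the exact script - the sda..sdl
device names and the depth of 32 are just placeholders for the 12 SSDs and
the depth under test):

  for dev in sd{a..l}; do
      echo 2  > /sys/block/$dev/queue/rq_affinity    # 1 = complete in submitter's CPU group (default), 2 = force completion on submitting CPU
      echo 2  > /sys/block/$dev/queue/nomerges       # 0 = merging enabled (default), 2 = no merge attempts
      echo 32 > /sys/block/$dev/device/queue_depth   # sdev queue depth under test (8/16/32/64/254)
  done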

So, again, we see that the performance effect of changing the sdev queue depth depends on the workload and also on other queue config (rq_affinity, nomerges).
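
FWIW, the fio invocation was along these lines (again only a sketch - rw=read,
numjobs=1 and loops=10000 come from the thread, while the block size, io
engine, iodepth and device name are assumptions):

  fio --name=sdev-qd-test --filename=/dev/sda \
      --rw=read --bs=4k --ioengine=libaio --direct=1 \
      --iodepth=64 --numjobs=1 --loops=10000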

Thanks,
John


