On 12/01/2021 11:44, Kashyap Desai wrote:
loops = 10000
Is there any effect on random read IOPS when you decrease the sdev queue
depth? For sequential IO, IO merging can be enhanced that way.
Let me check...
John - Can you check your test with rq_affinity=2 and nomerges=2?
I have noticed drops similar to what you have reported - "once we get
near the sdev or host queue depth limit, performance sometimes drops due
to contention."
But this behavior keeps changing, since there has been a lot of active
kernel development in this area, so I don't know the exact details of
which kernel versions are affected.
I have a similar setup (16 SSDs) and I will try a similar test on the latest kernel.
BTW - I remember that rq_affinity=2 plays a major role in such issues. I
usually do my testing with rq_affinity=2.
Hi Kashyap,
As requested:
rq_affinity=1, nomerges=0 (default)
sdev queue depth    num jobs=1
  8                 1650
 16                 1638
 32                 1612
 64                 1573
254                 1435 (default for LLDD)
rq_affinity=2, nomerges=2
sdev queue depth    num jobs=1
  8                 1236
 16                 1423
 32                 1438
 64                 1438
254                 1438 (default for LLDD)
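For reference, these knobs can all be set per-device via the standard
sysfs attributes - a rough sketch below (the sd? glob standing in for the
12 test disks is a placeholder, and 64 is just one of the depths from the
tables):

  for dev in /sys/block/sd?; do
      # per-device SCSI queue depth (8/16/32/64/254 in the tables above)
      echo 64 > $dev/device/queue_depth
      # second run only: complete requests on the requesting CPU
      echo 2 > $dev/queue/rq_affinity
      # second run only: disable all merge attempts
      echo 2 > $dev/queue/nomerges
  done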
Setup as original: fio read, 12x SAS SSDs
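A sketch of that kind of fio invocation, for reference - apart from
loops=10000 (from the quoted job file above), the ioengine, iodepth and
block size are illustrative assumptions, not necessarily the values
actually used:

  fio --name=seqread --filename=/dev/sdX --rw=read --direct=1 --bs=4k \
      --ioengine=libaio --iodepth=32 --numjobs=1 --loops=10000 \
      --group_reporting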
So, again, we see that the performance effect of changing the sdev queue
depth depends on the workload and also on the rest of the queue config.
Thanks,
John