Re: [PATCH] blk-mq: test QUEUE_FLAG_HCTX_ACTIVE for sbitmap_shared in hctx_may_queue

On 06/01/2021 01:28, Ming Lei wrote:
>>> How many LUNs are involved in the above test with 260 depth?
>> For me, there were 12 SAS SSDs; for convenience, here is the cover letter
>> with the details:
>> https://lore.kernel.org/linux-block/1597850436-116171-1-git-send-email-john.garry@xxxxxxxxxx/
>>
>> IIRC, for megaraid sas, Kashyap used many more LUNs for testing (64) and a
>> high fio depth (128) but did not reduce .can_queue; the topic was originally
>> raised here:
>> https://lore.kernel.org/linux-block/29f8062c1fccace73c45252073232917@xxxxxxxxxxxxxx/
> OK, in both tests nr_luns is big enough w.r.t. the 260 depth. Maybe that is
> why very low IOPS is observed in 'Final(hosttag=1)' with 260 depth.
>
> I'd suggest running your previous test again after applying this patch, and
> seeing if any difference can be observed.

Hi Ming,

I tested and didn't see a noticeable difference with the fix when reducing the tag queue depth. I got ~500K IOPS with a tag queue depth of 260, as opposed to 2M with the full tag queue depth. However, I was doubtful about this test method before. Regardless, your change and this feature still look proper.
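
For reference, the low IOPS at 260 depth lines up with the fair-share tag calculation: the shared tags get divided among the active queues, so with 12+ LUNs each one only gets a couple of dozen tags. Below is a minimal user-space sketch of that calculation as I understand the hctx_may_queue() logic; fair_share_depth() is just an illustrative helper here, not a kernel function:

/*
 * Simplified sketch of the per-queue fair-share depth calculation
 * done in blk-mq's hctx_may_queue() (not the actual kernel code).
 * Callers are assumed to pass active_users > 0.
 */
#include <stdio.h>

static unsigned int fair_share_depth(unsigned int total_depth,
                                     unsigned int active_users)
{
        /* Round the share up, but never allow fewer than 4 tags. */
        unsigned int depth = (total_depth + active_users - 1) / active_users;

        return depth < 4 ? 4 : depth;
}

int main(void)
{
        /* 260 shared tags across 12 active LUNs -> ~22 tags per LUN */
        printf("%u\n", fair_share_depth(260, 12));
        return 0;
}

With the full .can_queue depth, the per-LUN share stays large enough not to throttle submission, which would explain the gap up to the ~2M IOPS figure.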

@Kashyap, it would be great if you guys could also test this on the same setup you described previously:

https://lore.kernel.org/linux-block/29f8062c1fccace73c45252073232917@xxxxxxxxxxxxxx/
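
In case it helps with recreating that, a fio job along the lines quoted above (many LUNs, iodepth 128) might look roughly like this; the device paths and the remaining parameters are my assumptions, not the original job file:

[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=128
group_reporting

; one [lunN] section per LUN, 64 in total
[lun0]
filename=/dev/sdb

[lun1]
filename=/dev/sdc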

Thanks,
John
