Re: [PATCH] blk-mq: plug request for shared sbitmap

On 18/05/2021 13:51, John Garry wrote:
> On 18/05/2021 13:00, Ming Lei wrote:
>>>> 'Before 620K' could be caused by the queue being busy when batching
>>>> submission isn't applied, so the merge chance is increased. This
>>>> patch applies batching submission, so the queue doesn't become busy
>>>> enough.
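
To spell out the trade-off for the archive: with plugging, requests sit
on the per-task plug list and hit the driver in one batch, roughly like
this (a simplified sketch of the idea, not the exact blk-mq code):

static void submit_with_plug(struct blk_plug *plug, struct request *rq)
{
	/*
	 * The request waits on the plug list instead of being queued
	 * immediately, so the queue looks less busy and later bios see
	 * fewer queued requests to merge with.
	 */
	list_add_tail(&rq->queuelist, &plug->mq_list);

	/* Dispatch the whole batch once enough requests have gathered. */
	if (++plug->rq_count >= BLK_MAX_REQUEST_COUNT)
		blk_flush_plug_list(plug, false);
}

Without batching, each request is queued to the scheduler straight
away, the queue runs busy, and requests linger where they can be
merged; hence the higher 'before' read IOPS.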

>>>> BTW, what is the queue depth of the sdev and the can_queue of the
>>>> shost for your HiSilicon SAS?
>>> sdev queue depth is 64 (see hisi_sas_slave_configure()) and host
>>> depth is 4096 - 96 (for reserved tags) = 4000.
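
For reference, the relevant bit of hisi_sas_slave_configure() is
roughly this (paraphrased from memory, not a verbatim copy of the
driver):

static int hisi_sas_slave_configure(struct scsi_device *sdev)
{
	struct domain_device *dev = sdev_to_domain_dev(sdev);
	int ret = sas_slave_configure(sdev);

	if (ret)
		return ret;

	/* Cap the queue depth of non-SATA devices at 64. */
	if (!dev_is_sata(dev))
		sas_change_queue_depth(sdev, 64);

	return 0;
}
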
>> OK, so that queue depth should get IO merged if there are too many
>> requests queued.
>>
>> What is the same read test result without shared tags? Still 430K?

> I never left a driver switch in place to disable it.

> I can forward-port "reply-map" support, which is not too difficult,
> and I will let you know the result.
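
The port follows the usual reply-map pattern; a rough sketch of what I
mean, modelled on what megaraid_sas does (hisi_hba->reply_map here is a
new per-CPU table, not something in the current driver):

static void hisi_sas_setup_reply_map(struct hisi_hba *hisi_hba,
				     unsigned int nvecs)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < nvecs; queue++) {
		mask = pci_irq_get_affinity(hisi_hba->pci_dev, queue);
		if (!mask)
			goto fallback;

		/* Each CPU completes on the queue its irq is affine to. */
		for_each_cpu(cpu, mask)
			hisi_hba->reply_map[cpu] = queue;
	}
	return;

fallback:
	/* Spread CPUs across the completion queues evenly. */
	for_each_possible_cpu(cpu)
		hisi_hba->reply_map[cpu] = cpu % nvecs;
}

The completion queue for a command is then picked from the map at
submission time, instead of relying on managed interrupts plus the
shared sbitmap.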

The 'after' results are similar to those without shared sbitmap, i.e.
using reply-map:

reply-map:
450K IOPS (read), 430K IOPS (randread)

For reference, with shared sbitmap:
Before: 620K IOPS (read), 300K IOPS (randread)
After:  460K IOPS (read), 430K IOPS (randread)*

These are all with mq-deadline.

* I mixed up the read and randread results earlier by accident.


>> And what is your exact read test script? And how many CPU cores are
>> in your system?


Thanks,
John


