Re: [PATCH] blk-mq: plug request for shared sbitmap

On 19/05/2021 01:21, Ming Lei wrote:
>> The 'after' results are similar to without shared sbitmap, i.e. using
>> reply-map:
>>
>> reply-map:
>> 450K (read), 430K IOPS (randread)
>
> OK, that is the expected result. After shared sbitmap, IO merge gets
> improved when batching submission is bypassed; meantime IOPS of random
> IO drops because CPU utilization is increased.
>
> So that isn't a regression, let's live with this awkward situation, :-(

Well, at least we have ~parity with non-shared sbitmap now. And we also know that higher performance is possible for the "read" (vs "randread") scenario, FWIW.

BTW, recently we have seen two optimisations/improvements for shared sbitmap which were related to nr_hw_queues == 1 checks - this patch and the change of the default IO scheduler.
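
To make the connection concrete for anyone following along, below is a minimal userspace sketch of the plugging decision as I read it. The struct and function names (queue_state, use_plug) are mine, not kernel identifiers; the condition paraphrases the plug check in blk_mq_submit_bio(), with the shared-sbitmap case being what this patch adds:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the relevant request_queue/hctx state. */
struct queue_state {
	unsigned int nr_hw_queues;
	bool sbitmap_shared;   /* hctx flagged as sharing the tag sbitmap */
	bool has_commit_rqs;   /* driver provides a ->commit_rqs() hook */
	bool rotational;       /* slow HDD, where plug merging helps most */
};

/*
 * Paraphrase of the plug condition: before this patch, a shared-sbitmap
 * queue with nr_hw_queues > 1 skipped plugging and so lost IO merging;
 * the sbitmap_shared case restores it.
 */
static bool use_plug(const struct queue_state *q)
{
	return q->nr_hw_queues == 1 ||
	       q->sbitmap_shared ||     /* the case this patch adds */
	       q->has_commit_rqs ||
	       q->rotational;
}

int main(void)
{
	struct queue_state q = { .nr_hw_queues = 16, .sbitmap_shared = true };

	printf("plug? %s\n", use_plug(&q) ? "yes" : "no");
	return 0;
}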

I am wondering how you detected/analyzed this issue, and whether we need to audit other nr_hw_queues == 1 checks. From a quick scan, the only other possible thing I see is the q->nr_hw_queues > 1 check for direct issue in blk_mq_submit_bio() - I suspect you know more about that topic.
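
For reference, the direct-issue gate I mean looks roughly like the sketch below - again a paraphrased userspace model with made-up names (mq_state, try_direct_issue), not the actual kernel code:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in, as in the earlier sketch. */
struct mq_state {
	unsigned int nr_hw_queues;
	bool hctx_dispatch_busy;  /* target hctx is busy dispatching */
};

/*
 * Paraphrase of the condition gating direct issue: a sync request is
 * sent straight to the driver when the queue has multiple hardware
 * queues, or when the target hctx is not busy.
 */
static bool try_direct_issue(const struct mq_state *q, bool is_sync)
{
	return (q->nr_hw_queues > 1 && is_sync) || !q->hctx_dispatch_busy;
}

int main(void)
{
	struct mq_state q = { .nr_hw_queues = 16, .hctx_dispatch_busy = false };

	printf("direct issue? %s\n",
	       try_direct_issue(&q, true) ? "yes" : "no");
	return 0;
}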

Thanks,
John



