Re: [PATCH V5 8/8] blk-mq: improve bio merge from blk-mq sw queue

On Tue, Oct 03, 2017 at 02:21:43AM -0700, Christoph Hellwig wrote:
> This looks generally good to me, but I really worry about the impact
> on very high iops devices.  Did you try this e.g. for random reads
> from unallocated blocks on an enterprise NVMe SSD?

There appears to be no such impact; please see the following data
from a fio test (libaio, direct=1, bs=4k, 64 jobs, randread, none scheduler):

[root@storageqe-62 results]# ../parse_fio 4.14.0-rc2.no_blk_mq_perf+-nvme-64jobs-mq-none.log 4.14.0-rc2.BLK_MQ_PERF_V5+-nvme-64jobs-mq-none.log
------------------------------------------------
 IOPS(K)  | without patchset | with patchset V5
------------------------------------------------
randread  |           650.98 |           653.15
------------------------------------------------
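For reference, a fio job file matching the parameters above might look like the following; the device path, iodepth, and runtime are assumptions, since the original message does not state them:

```
; hypothetical reproduction of the test above
; /dev/nvme0n1, iodepth and runtime are assumed values
[randread-nvme]
filename=/dev/nvme0n1
ioengine=libaio
direct=1
bs=4k
rw=randread
numjobs=64
iodepth=64
runtime=60
time_based
group_reporting
```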

Alternatively:

If you are still worried about the impact, we could simply disable bio
merging on NVMe when the none scheduler is used. Merging NVMe
requests/bios is basically impossible with none, but it is doable with
the kyber scheduler.

-- 
Ming

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
