Re: Recommendations for I/O (blk-mq) scheduler for HDDs and SSDs?

Hi Patrick,

Some thoughts about blk-mq:

VM guests (virtio-blk)
the host's local disks (scsi-mq)
ceph (rbd)
multipath (device mapper; dm / dm-mpath)
  • how to enable it: dm_mod.use_blk_mq=y (see the quick sketch after this list)

  • disabled by default; how to verify: to determine whether DM multipath is using blk-mq on a system, cat the file /sys/block/dm-X/dm/use_blk_mq, where dm-X is replaced by the DM multipath device of interest. This file is read-only and reflects what the global value in /sys/module/dm_mod/parameters/use_blk_mq was at the time the request-based DM multipath device was created. (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.2_release_notes/storage)

  • I thought it would not make sense, since iSCSI is by definition (network) much slower than the SSD/NVMe devices blk-mq was designed for, but...: "It may be beneficial to set dm_mod.use_blk_mq=y if the underlying SCSI devices are also using blk-mq, as doing so reduces locking overhead at the DM layer." (Red Hat)
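
For completeness, a quick sketch of how to set and check it (dm-0 is just a placeholder for whichever DM multipath device you have):

    # set it before the multipath devices are created, e.g. on the kernel
    # command line (works whether dm_mod is built in or loaded as a module):
    #     dm_mod.use_blk_mq=y

    # then verify; the per-device file is read-only and reflects the global
    # value at the time the device was created
    cat /sys/module/dm_mod/parameters/use_blk_mq
    cat /sys/block/dm-0/dm/use_blk_mq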

Observations:
We tried several schedulers in our environment, but we didn't notice an improvement significant enough to justify a global change across the whole environment. The best thing is to change/test/document and repeat, again and again :)
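
Something like this, just to illustrate the loop (untested; sdX, the scheduler name and the fio parameters are placeholders, adjust them to your devices and workload):

    # switch the scheduler at runtime, no reboot needed
    echo kyber > /sys/block/sdX/queue/scheduler

    # run the same benchmark after every switch and write the numbers down
    # (randread against the raw device only reads, but double-check the device name)
    fio --name=randread --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based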

Hope it helps

Best,



German

2017-12-11 18:17 GMT-03:00 Patrick Fruh <pf@xxxxxxx>:

Hi,

 

after reading a lot about I/O schedulers and performance gains with blk-mq, I switched to a custom 4.14.5 kernel with CONFIG_SCSI_MQ_DEFAULT enabled to have blk-mq for all devices on my cluster.
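
(As far as I can tell, booting a stock 4.x kernel with scsi_mod.use_blk_mq=1 on the kernel command line should give the same behaviour, so a rebuild may not strictly be necessary.)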

 

This allows me to use the following schedulers for HDDs and SSDs:

mq-deadline, kyber, bfq, none
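
For reference, the active scheduler is the one shown in brackets when catting the queue file (sda is just an example device, and the ordering may differ):

    cat /sys/block/sda/queue/scheduler
    # prints the available schedulers with the active one in brackets, e.g.:
    #   [mq-deadline] kyber bfq none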

 

I’ve currently set the HDD scheduler to bfq and the SSD scheduler to none; however, I’m still not sure if this is the best solution performance-wise.
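
To make that split persistent across reboots, a udev rule along these lines should work (untested sketch; the file name is arbitrary):

    # /etc/udev/rules.d/60-io-scheduler.rules
    # rotational disks -> bfq, non-rotational (SSDs) -> none
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"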

Does anyone have more experience with this and can maybe give me a recommendation? I’m not even sure if blk-mq is a good idea for ceph, since I haven’t really found anything on the topic.

 

Best,

Patrick




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
