> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Florian Haas
> Sent: 13 March 2017 10:09
> To: Dan van der Ster <dan@xxxxxxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: osd_disk_thread_ioprio_priority help
>
> On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster
> <dan@xxxxxxxxxxxxxx> wrote:
> >> I'm sorry, I may have worded that in a manner that's easy to
> >> misunderstand. I generally *never* suggest that people use CFQ on
> >> reasonably decent I/O hardware, and thus have never come across any
> >> need to set this specific ceph.conf parameter.
> >
> > OTOH, cfq *does* help our hammer clusters. deadline's default
> > behaviour is to delay writes up to 5 seconds if the disk is busy
> > reading -- which it is, of course, while deep scrubbing. And deadline
> > does not offer any sort of fairness between processes accessing the
> > same disk (which is admittedly less of an issue in jewel). But back in
> > hammer days it was nice to be able to make the disk threads only read
> > while the disk was otherwise idle.
>
> Thanks for pointing out the default 5000-ms write deadline. We
> frequently tune that down to 1500 ms. Disabling front merges also
> sometimes seems to help.
>
> For the archives: those settings are in
> /sys/block/*/queue/iosched/{write_expire,front_merges} and can be
> persisted on Debian/Ubuntu with sysfsutils.

Also, it may be of some interest that Linux 4.10 adds new background
priority writeback functionality:

https://kernelnewbies.org/Linux_4.10#head-f6ecae920c0660b7f4bcee913f2c71a859dcc184

I've found this makes quite a big difference to read latency when the
cluster is under heavy writes, as the WBthrottle allows 5000 IOs to
queue up by default.

> Cheers,
> Florian
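
For anyone who wants to persist those write_expire/front_merges tweaks
with sysfsutils, an /etc/sysfs.conf entry along these lines should do
it -- sdb is just a placeholder for the OSD data disk, and the values
are the ones Florian mentions, so adjust to taste:

    # /etc/sysfs.conf (sysfsutils), applied at boot
    block/sdb/queue/scheduler = deadline
    # don't let deadline starve writes for more than 1.5 s while reads are busy
    block/sdb/queue/iosched/write_expire = 1500
    # disable front merges
    block/sdb/queue/iosched/front_merges = 0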
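
And since the thread started with osd_disk_thread_ioprio_priority: the
way it is typically used (it only takes effect when the OSD's data disk
is actually on cfq) is roughly:

    [osd]
    # only honoured when the underlying data disk uses the cfq scheduler
    osd disk thread ioprio class = idle
    # 0-7; mostly relevant for the "be" class -- with "idle" the disk
    # threads (scrub, snap trim) only get the disk when it is otherwise idle
    osd disk thread ioprio priority = 7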
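
On the WBthrottle point: if memory serves, the filestore defaults
behind that 5000-IO queue are the ones below (XFS backend shown; there
are matching btrfs options). Lowering the hard limits makes the flusher
kick in earlier -- treat this as a sketch of where the knobs live, not
a recommendation:

    [osd]
    # filestore writeback throttle defaults (hammer/jewel, XFS backend)
    filestore wbthrottle xfs ios start flusher = 500
    filestore wbthrottle xfs ios hard limit = 5000
    filestore wbthrottle xfs inodes start flusher = 500
    filestore wbthrottle xfs inodes hard limit = 5000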