Hello,

On Tue, 23 Jun 2015 12:53:45 +0200 Jan Schermer wrote:

> I use CFQ but I have just discovered it completely _kills_ writes when
> also reading (doing backfill for example)
>
I've seen similar things, but for the record and so people can correctly
reproduce things, please be specific.
For starters, what version of Ceph?
CFQ with what kernel, with what filesystem, on what type of OSD (HDD,
HDD with on-disk journal, HDD with SSD journal)?

> If I run a fio job for synchronous writes and at the same time run a
> fio job for random reads, writes drop to 10 IOPS (oops!). Setting io
> priority with ionice works nicely, maintaining ~250 IOPS for writes
> while throttling reads.
>
Setting the priority to what (level and type) on which process?
The fio ones, the OSD ones?

Scrub and friends can really wreak havoc on one of my clusters, which is
99% writes; the same goes for the few times it has to do reads (VMs
booting).

Christian

> I looked at osd_disk_thread_ioprio_class - for some reason the
> documentation says “idle” “rt” “be” for possible values, but it only
> accepts numbers (3 should be idle) in my case - and doesn’t seem to do
> anything in regards to slow requests. Do I need to restart the OSD for
> it to take effect? It actually looks like it made things even worse
> for me…
>
> Changing the scheduler to deadline improves the bottom line a lot for
> my benchmark, but a large amount of reads can still drop that to 30
> IOPS - contrary to CFQ, which maintains a steady 250 IOPS for writes
> even under read load.
>
> What would be the recommendation here? Did someone test this
> extensively before?
>
> thanks
>
> Jan

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
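
For anyone wanting to reproduce the numbers Jan describes, a minimal pair
of fio invocations along these lines should do; the test file, block size
and runtime below are my assumptions, not his actual job files:

    # terminal 1: synchronous sequential writes against a test file
    fio --name=syncwrite --filename=/tmp/fio-test.dat --size=4G \
        --rw=write --ioengine=sync --sync=1 --bs=4k --direct=1 \
        --time_based --runtime=300

    # terminal 2: concurrent random reads against the same file
    fio --name=randread --filename=/tmp/fio-test.dat --size=4G \
        --rw=randread --ioengine=libaio --iodepth=32 --bs=4k --direct=1 \
        --time_based --runtime=300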
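
The "level and type" question matters because ionice takes both a class
and (for best-effort) a priority level, and it only has an effect when
the disk is on CFQ. A sketch of throttling the read job, with the PID and
the values chosen here being assumptions rather than Jan's settings:

    # move an already running read job into the idle class (class 3)
    ionice -c 3 -p <pid_of_read_fio>

    # or launch it in the best-effort class (2) at the lowest priority (7)
    ionice -c 2 -n 7 fio --name=randread ...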
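
On the osd_disk_thread_ioprio_* options: they set the io priority of the
OSD's disk thread (scrubbing and similar background work) and, again,
only matter when the OSD's data disk uses CFQ. A sketch of setting them
at runtime; the "osd.*" target, priority value and device name are
illustrative assumptions, and depending on the Ceph version a restart may
indeed be required and the accepted value format may differ, as Jan saw:

    # push the disk thread into the idle class, lowest priority
    ceph tell osd.* injectargs \
        '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'

    # check/switch the scheduler on the OSD's data disk
    cat /sys/block/sdX/queue/scheduler
    echo cfq > /sys/block/sdX/queue/scheduler    # or deadline, to compare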