IO scheduler & osd_disk_thread_ioprio_class

I use CFQ, but I have just discovered it completely _kills_ writes when the disk is also reading (during backfill, for example).

If I run a fio job doing synchronous writes and at the same time a fio job doing random reads, writes drop to 10 IOPS (oops!). Setting the IO priority with ionice works nicely, maintaining ~250 IOPS for the writes while throttling the reads.
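For reference, this is roughly the kind of test I mean (the device name and exact fio parameters below are only illustrative, and it is a raw-device test, so not on a disk holding data you care about):

# synchronous 4k writes at queue depth 1
fio --name=syncwrite --filename=/dev/sdX --rw=write --bs=4k \
    --sync=1 --direct=1 --iodepth=1 --runtime=60 --time_based

# competing random reads on the same disk, dropped into CFQ's idle class via ionice
ionice -c 3 fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based

Without the ionice wrapper the write job collapses to ~10 IOPS as soon as the reader starts; with it the writes stay at ~250 IOPS.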

I looked at osd_disk_thread_ioprio_class - for some reason the documentation lists "idle", "rt" and "be" as the possible values, but in my case it only accepts numbers (3 should be idle) - and it doesn't seem to do anything with regard to slow requests. Do I need to restart the OSD for it to take effect? It actually looks like it made things even worse for me...
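For completeness, this is roughly how I set it (osd.* and priority 7 are just an example), first the string form the documentation describes and then the numeric class (3 = idle) that my OSDs actually accepted:

# ceph.conf, [osd] section, as documented
osd disk thread ioprio class = idle
osd disk thread ioprio priority = 7

# injected at runtime with the numeric class, which is what got accepted here
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class 3 --osd_disk_thread_ioprio_priority 7'

As far as I understand these options only have an effect when the disk is actually on the cfq scheduler.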

Changing the scheduler to deadline improves the bottom line a lot in my benchmark, but a heavy read load can still drop the writes to 30 IOPS - unlike CFQ (with ionice), which maintains a steady 250 IOPS for writes even under read load.
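For the record, the scheduler switch was just the usual sysfs toggle (sdX again standing in for the OSD data disk):

# check which scheduler is active
cat /sys/block/sdX/queue/scheduler

# switch to deadline (not persistent across reboot; echo cfq back to revert)
echo deadline > /sys/block/sdX/queue/scheduler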

What would be the recommendation here? Has anyone tested this extensively before?

thanks

Jan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



