Hi,

I was trying to determine the performance impact of deep-scrubbing with the osd_disk_thread_ioprio_class option set, but it looks like the option is ignored. Performance during deep-scrub is the same whether the option is set or left at its defaults (about 1/3 of "normal" performance either way).

# ceph --admin-daemon /var/run/ceph/ceph-osd.26.asok config show | grep osd_disk_thread_ioprio
  "osd_disk_thread_ioprio_class": "idle",
  "osd_disk_thread_ioprio_priority": "7",

# ps -efL | grep 'ce[p]h-osd --cluster=ceph -i 26' | awk '{ print $4; }' | xargs --no-run-if-empty ionice -p | sort | uniq -c
     18 unknown: prio 0
    186 unknown: prio 4

# cat /sys/class/block/sdf/queue/scheduler
noop deadline [cfq]

And finally, GDB:

Breakpoint 1, ceph_ioprio_string_to_class (s=...) at common/io_priority.cc:48
warning: Source file is more recent than executable.
48          return IOPRIO_CLASS_IDLE;
(gdb) cont
Continuing.

Breakpoint 2, OSD::set_disk_tp_priority (this=0x3398000) at osd/OSD.cc:8548
warning: Source file is more recent than executable.
8548        disk_tp.set_ioprio(cls, cct->_conf->osd_disk_thread_ioprio_priority);
(gdb) print cls
$1 = -22

So the IO priorities are *NOT* set (they are only applied when cls >= 0), and I'm not sure where this -22 came from. Any ideas? In the meantime I'll compile Ceph from source and check again.

Ceph was installed from the Ceph repositories:

# ceph-osd -v
ceph version 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82)

# apt-cache policy ceph
ceph:
  Installed: 0.86-1precise
  Candidate: 0.86-1precise
  Version table:
 *** 0.86-1precise 0
        500 http://eu.ceph.com/debian-giant/ precise/main amd64 Packages
        100 /var/lib/dpkg/status

-- 
PS
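
For what it's worth, -22 is -EINVAL on Linux, which would mean ceph_ioprio_string_to_class fell through to an "unknown string" fallback instead of matching "idle". Below is a minimal, self-contained sketch of that kind of mapping; it is only a guess at the shape of common/io_priority.cc, not the actual 0.86 source, and the function name, the accepted spellings, the lowercasing, and the -EINVAL fallback are all assumptions:

// Sketch only: shows how a string -> IO-priority-class lookup could
// return -EINVAL (-22) for anything it does not recognize.
// Not the actual Ceph implementation.
#include <algorithm>
#include <cctype>
#include <cerrno>
#include <iostream>
#include <string>

// Class values as defined by the Linux ioprio ABI.
enum {
  IOPRIO_CLASS_NONE = 0,
  IOPRIO_CLASS_RT   = 1,
  IOPRIO_CLASS_BE   = 2,
  IOPRIO_CLASS_IDLE = 3,
};

// Hypothetical helper, for illustration only.
int ioprio_string_to_class(std::string s)
{
  // Normalize case so "Idle" and "IDLE" also match.
  std::transform(s.begin(), s.end(), s.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  if (s == "idle") return IOPRIO_CLASS_IDLE;
  if (s == "be")   return IOPRIO_CLASS_BE;
  if (s == "rt")   return IOPRIO_CLASS_RT;
  return -EINVAL;  // -22 on Linux: any unmatched input ends up here
}

int main()
{
  std::cout << ioprio_string_to_class("idle")   << "\n";  // 3
  std::cout << ioprio_string_to_class("idle\n") << "\n";  // -22 (stray newline)
  std::cout << ioprio_string_to_class("idle ")  << "\n";  // -22 (trailing space)
}

If the real function is stricter than this sketch (no trimming, exact case), then any stray whitespace or capitalization in the value the OSD actually sees would be enough to produce the -22 above, even though "config show" prints "idle".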