I am currently trying to run tests to see how the mclock_client queue works on Mimic. But when I tried to configure the (reservation, weight, limit) tags for each client, I could not find any options to distinguish between different clients.
All I found were the following mclock options for mclock_opclass, which only distinguish between different types of operations:
[root@ceph-node1 ~]# ceph daemon osd.0 config show | grep mclock
"osd_op_queue": "mclock_opclass",
"osd_op_queue_mclock_client_op_lim": "100.000000",
"osd_op_queue_mclock_client_op_res": "100.000000",
"osd_op_queue_mclock_client_op_wgt": "500.000000",
"osd_op_queue_mclock_osd_subop_lim": "0.000000",
"osd_op_queue_mclock_osd_subop_res": "1000.000000",
"osd_op_queue_mclock_osd_subop_wgt": "500.000000",
"osd_op_queue_mclock_recov_lim": "0.001000",
"osd_op_queue_mclock_recov_res": "0.000000",
"osd_op_queue_mclock_recov_wgt": "1.000000",
"osd_op_queue_mclock_scrub_lim": "100.000000",
"osd_op_queue_mclock_scrub_res": "100.000000",
"osd_op_queue_mclock_scrub_wgt": "500.000000",
"osd_op_queue_mclock_snap_lim": "0.001000",
"osd_op_queue_mclock_snap_res": "0.000000",
"osd_op_queue_mclock_snap_wgt": "1.000000"
I am wondering whether Ceph Mimic provides any configuration interface for the mclock_client queue?
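For context, my plan for the test itself is simply to switch the scheduler in ceph.conf and restart the OSD, roughly as below (this is my assumption of how mclock_client is meant to be enabled; please correct me if there is a better way):
[osd]
osd_op_queue = mclock_client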