Re: How to config mclock_client queue?

So I do not think the mclock_client queue works the way you’re hoping it does. For categorization purposes it joins the operation class with the client identifier, with the intent that operations are scheduled more evenly across clients (i.e., it won’t favor one client over another).
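
To make that concrete, here is a rough sketch of the idea (deliberately simplified C++, not the actual OSD code; the real types and names differ):

    // Simplified illustration: the queue is keyed on (client id, op class),
    // so each client gets its own dmclock sub-queue, but every client shares
    // the same (res, wgt, lim) tags configured for a given class.
    #include <cstdint>
    #include <utility>

    enum class op_class { client_op, osd_subop, snap_trim, recovery, scrub };
    using sched_key = std::pair<uint64_t /* client id */, op_class>;

    sched_key make_key(uint64_t client_id, op_class c) {
      return {client_id, c};
    }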

However, it was not designed to support distinct per-client configurations, which seems to be what you’re after.
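
If you just want to exercise the mclock_client queue with the existing class-level tags, something along these lines in ceph.conf should do it (a sketch, not a tested recipe; the tag values are simply the ones from your config dump, and the queue type is only picked up when the OSDs restart):

    [osd]
    osd op queue = mclock_client
    # these tags apply per operation class and are shared by all clients
    osd op queue mclock client op res = 100.0
    osd op queue mclock client op wgt = 500.0
    osd op queue mclock client op lim = 100.0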

I started an effort to update librados (and the path all the way back to the OSDs) to allow per-client QoS configuration. However, I got pulled off of that for other priorities. I believe Mark Kogan is working on it as he has time; that might be closer to what you’re after. See: https://github.com/ceph/ceph/pull/20235 .

Eric

> On Mar 26, 2019, at 8:14 AM, Wang Chuanwen <mos_wendy@xxxxxxxxxxx> wrote:
> 
> I am now trying to run tests to see how the mclock_client queue works on Mimic. But when I tried to configure the (r, w, l) tags for each client, I found there are no options to distinguish different clients.
> All I got were the following options for mclock_opclass, which are used to distinguish different types of operations.
> 
> [root@ceph-node1 ~]# ceph daemon osd.0 config show | grep mclock
> "osd_op_queue": "mclock_opclass",
> "osd_op_queue_mclock_client_op_lim": "100.000000",
> "osd_op_queue_mclock_client_op_res": "100.000000",
> "osd_op_queue_mclock_client_op_wgt": "500.000000",
> "osd_op_queue_mclock_osd_subop_lim": "0.000000",
> "osd_op_queue_mclock_osd_subop_res": "1000.000000",
> "osd_op_queue_mclock_osd_subop_wgt": "500.000000",
> "osd_op_queue_mclock_recov_lim": "0.001000",
> "osd_op_queue_mclock_recov_res": "0.000000",
> "osd_op_queue_mclock_recov_wgt": "1.000000",
> "osd_op_queue_mclock_scrub_lim": "100.000000",
> "osd_op_queue_mclock_scrub_res": "100.000000",
> "osd_op_queue_mclock_scrub_wgt": "500.000000",
> "osd_op_queue_mclock_snap_lim": "0.001000",
> "osd_op_queue_mclock_snap_res": "0.000000",
> "osd_op_queue_mclock_snap_wgt": "1.000000"
> 
> I am wondering whether Ceph Mimic provides any configuration interface for the mclock_client queue?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



