(no subject)


 



Hello there,
I'm trying to reduce the impact of recovery on client operations and am using mclock for this purpose. I've tested different weights for the queues but haven't seen any effect on real performance.

ceph version 12.2.8 luminous (stable)

Last tested config:
    "osd_op_queue": "mclock_opclass",
    "osd_op_queue_cut_off": "high",
    "osd_op_queue_mclock_client_op_lim": "0.000000",
    "osd_op_queue_mclock_client_op_res": "1.000000",
    "osd_op_queue_mclock_client_op_wgt": "1000.000000",
    "osd_op_queue_mclock_osd_subop_lim": "0.000000",
    "osd_op_queue_mclock_osd_subop_res": "1.000000",
    "osd_op_queue_mclock_osd_subop_wgt": "1000.000000",
    "osd_op_queue_mclock_recov_lim": "0.000000",
    "osd_op_queue_mclock_recov_res": "1.000000",
    "osd_op_queue_mclock_recov_wgt": "1.000000",
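
For context, here is a sketch of how these values could be applied. This assumes the settings live in ceph.conf under [osd]; note that the queue selection (osd_op_queue) is read at OSD start-up, so a restart is assumed after changing it. The osd.0 target in the check command is a placeholder.

```shell
# Sketch only (assumed ceph.conf path and [osd] section placement):
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
osd op queue = mclock_opclass
osd op queue cut off = high
# In dmclock, a limit of 0 means "no limit" for that tag, not "zero ops".
osd op queue mclock recov lim = 0.000000
osd op queue mclock recov res = 1.000000
osd op queue mclock recov wgt = 1.000000
EOF

# After restarting an OSD, confirm the running value (osd.0 is a placeholder):
ceph daemon osd.0 config get osd_op_queue
```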
    
Is this feature really working? Am I doing something wrong?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
