Throttle down rebalance with Quincy

Hello,

I'm adding OSDs to a 5-node cluster running Quincy 17.2.5. The network is a bonded 2x10G link. The issue I'm having is that the rebalance operation impacts client I/O, and the running VMs do not perform well while it is going on. The OSDs are big 6.4 TB NVMe drives, so there will be a lot of data to move.

With previous releases it was easy to throttle the rebalance with "ceph config set osd osd_max_backfills", but since Quincy uses the mClock scheduler those values are no longer honoured; in fact, the defaults are overridden to 1000.
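For reference, these are the commands I have been comparing; osd.0 is just an example ID, and the first command is the pre-Quincy way of throttling:

    # pre-Quincy throttle, set globally for all OSDs
    ceph config set osd osd_max_backfills 1

    # what the running OSD actually reports under mClock (1000 in my case)
    ceph config show osd.0 osd_max_backfills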

If I'm understanding the mClock behaviour correctly, it will use the estimated osd_mclock_max_capacity_iops_ssd (benchmarked at OSD deploy time) and allow client/rebalance/backfill/trim/scrub I/O to fill the drive with IOPS up to the shares defined by osd_mclock_profile (the default is high_client_ops). Am I correct?
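To show what I have been looking at, these are the values I am checking (osd.0 is again just an example ID; the profile names in the comment are the ones listed in the Quincy docs):

    # IOPS capacity benchmarked at OSD deploy time
    ceph config show osd.0 osd_mclock_max_capacity_iops_ssd

    # active mClock profile (high_client_ops / balanced / high_recovery_ops / custom)
    ceph config show osd.0 osd_mclock_profile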

How could I throttle down the rebalance so it gives more headroom for client I/O?

Many thanks in advance.


