Re: how to set load balance on multi active mds?


 



Hi,

you could benefit from directory pinning [1] or dynamic subtree pinning [2]. We had great results with manual pinning in an older Nautilus cluster, but we haven't had a chance to test dynamic subtree pinning yet. It's difficult to tell in advance which option would best suit your use case, so you'll probably have to try both.
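
To give you an idea, here is a minimal Python sketch of the manual pinning approach from [1]: it just sets the ceph.dir.pin virtual xattr, which is what "setfattr -n ceph.dir.pin -v <rank> <dir>" does on the command line. The mount point, directory names and rank assignments are only placeholders for illustration, not taken from your cluster:

#!/usr/bin/env python3
"""Minimal sketch: manually pin CephFS directory trees to MDS ranks.

Assumptions: the filesystem is mounted at /mnt/cephfs, the directory
layout below is hypothetical, and the client runs on Linux so
os.setxattr/os.getxattr are available on the CephFS mount.
"""
import os

MOUNT = "/mnt/cephfs"  # hypothetical CephFS mount point

# Hypothetical mapping of busy top-level directories to MDS ranks.
# Spreading the hottest trees across ranks is what evens out the load.
PINS = {
    "projects/team-a": 0,
    "projects/team-b": 1,
    "projects/team-c": 2,
}

def pin_directory(path: str, rank: int) -> None:
    """Pin a directory tree to an MDS rank via the ceph.dir.pin xattr.

    Equivalent to: setfattr -n ceph.dir.pin -v <rank> <path>
    Setting the rank to -1 removes the pin again.
    """
    os.setxattr(path, "ceph.dir.pin", str(rank).encode())

def show_pin(path: str) -> str:
    """Try to read the pin back; depending on the client this virtual
    xattr may not be readable, so treat failure as 'unknown'."""
    try:
        return os.getxattr(path, "ceph.dir.pin").decode()
    except OSError:
        return "unknown"

if __name__ == "__main__":
    for rel_path, rank in PINS.items():
        full = os.path.join(MOUNT, rel_path)
        pin_directory(full, rank)
        print(f"{full} pinned to rank {show_pin(full)}")

Which directories you pin to which rank depends entirely on where your client load actually comes from, so check the per-MDS request counters first.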

Regards,
Eugen

[1] https://docs.ceph.com/en/reef/cephfs/multimds/#manually-pinning-directory-trees-to-a-particular-rank
[2] https://docs.ceph.com/en/reef/cephfs/multimds/#dynamic-subtree-partitioning-with-balancer-on-specific-ranks

Quoting zxcs <zhuxiongcs@xxxxxxx>:

Hi, experts,

we have a production environment built on Ceph version 16.2.11 (Pacific) and using CephFS. We have also enabled multiple active MDS daemons (more than 10), but we usually see an uneven distribution of client requests across these MDSs, see the picture below: the top MDS has 32.2k client requests, while the last one has only 331.

This regularly puts our cluster into a very bad state, e.g. many MDSs report slow requests:
	...
      7 MDSs report slow requests
      1 MDSs behind on trimming
	…


So our question is: how can we balance the load across these MDSs? Could anyone please help shed some light here?
Thanks a ton!


Thanks,
xz


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



