On 10/13/22 13:47, Yoann Moulin wrote:
>> Also, you mentioned you're using 7 active MDS. How's that working out
>> for you? Do you use pinning?
> I don't really know how to do that. I have 55 worker nodes in my K8s
> cluster; each one can run pods that have access to a CephFS PVC. We
> have 28 CephFS persistent volumes. The pods run ML/DL/AI workloads,
> and each one can be started and stopped whenever our researchers need
> it. The workloads are unpredictable.
See [1] and [2].
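In short: [1] pins a directory tree to a specific MDS rank by setting
an extended attribute on it, and [2] lets the MDS spread subtrees over
the active ranks automatically. A rough sketch, assuming the filesystem
is mounted at /mnt/cephfs and your PVs live under /mnt/cephfs/volumes
(the paths are made up, adjust to your layout):

    # Pin this tree (and everything below it) to MDS rank 2.
    setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/volumes/pv-example

    # Or enable distributed ephemeral pinning: the immediate
    # children of this directory get spread across the active
    # ranks automatically.
    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/volumes

With 28 PVs and unpredictable workloads, the distributed policy on the
common parent directory is probably the least hands-on option.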
[1]: https://docs.ceph.com/en/quincy/cephfs/multimds/#manually-pinning-directory-trees-to-a-particular-rank
[2]: https://docs.ceph.com/en/quincy/cephfs/multimds/#setting-subtree-partitioning-policies
Gr. Stefan