ceph mds recommended config

Hello. 
I have a relatively new Ceph installation that is only running CephFS at the moment.  We are seeing intermittent issues where "ceph -s" reports "MDSs report slow requests", and sometimes an MDS crashes and takes a while to recover/replay, or we have to manually restart an MDS daemon to get the cluster back to HEALTH_OK.
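
In case it helps, this is roughly what we run to look at things when the warning shows up (the MDS name below is a placeholder, and the restart command assumes a package/systemd deployment rather than cephadm):

    # overall health and per-rank MDS state
    ceph health detail
    ceph fs status

    # requests currently in flight on one MDS
    ceph tell mds.<name> dump_ops_in_flight

    # manual restart of a stuck daemon (a cephadm deployment would use
    # "ceph orch daemon restart" instead)
    systemctl restart ceph-mds@<name>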

Is there any documentation on recommended MDS configuration for this kind of setup?

Here is our cluster setup:
35 nodes total; 88 cores, 512 GB RAM, 100 Gb network
2 CephFS data pools, one all-SSD and the other all-NVMe
3 active MDS daemons: one pinned to the NVMe pool/directory, one pinned to another large directory, and the third with no pinning (pinning is done as sketched below)
2 standby MDS daemons
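
For reference, the pinning is done with the ceph.dir.pin extended attribute from a client mount; the paths and ranks here are only illustrative, not our real directory names:

    # pin the NVMe-backed tree to rank 0, the other large directory to rank 1
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/nvme
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects
    # -v -1 would remove the pin and let the balancer place the subtree again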

ceph config dump:
mds         advanced  mds_beacon_grace           60.000000
mds         basic     mds_cache_memory_limit     68719476736
mds         advanced  mds_cache_trim_threshold   65536
mds         advanced  mds_recall_max_decay_rate  2.000000
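
For completeness, the equivalent "ceph config set" commands (applying to all MDS daemons rather than to individual ones) would be:

    ceph config set mds mds_beacon_grace 60
    ceph config set mds mds_cache_memory_limit 68719476736    # 64 GiB
    ceph config set mds mds_cache_trim_threshold 65536
    ceph config set mds mds_recall_max_decay_rate 2.0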

Please let me know if more info is required.

Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx