Hello Ceph community,

Our Ceph cluster is running version 16.2.4 and is about a year old. All of a sudden the MDS daemons went into the failed state, and one filesystem is reported as degraded. When we restart the daemons they come back into the running state, but within about 30 seconds they go into the failed state again. We are currently not seeing any client I/O on the cluster. Restarting the hosts did not help either; the MDS daemons keep returning to the failed state.

Here is the current health message:

    3 failed cephadm daemon(s)
    1 filesystem is degraded
    insufficient standby MDS daemons available
    1 MDSs behind on trimming
    1 filesystem is online with fewer MDS than max_mds

Cluster overview:

    No. of MDS  - 3
    No. of MONs - 5
    No. of MGRs - 5
    No. of OSDs - 18 (on 9 hosts)

If any additional details about the cluster configuration are required, I'll try to provide them. Any help would be appreciated.

Thanks,
Santhosh.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
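In case it helps, these are the standard Ceph/cephadm diagnostic commands we can run and share the output of. This is only a sketch of what to collect; the `<fsid>` and `<daemon-name>` placeholders below are assumptions and would need to be filled in from `cephadm ls` on the affected host.

```shell
# Overall cluster and filesystem state
ceph -s
ceph health detail
ceph fs status

# Full MDS map, including which ranks are failed or damaged
ceph fs dump

# cephadm-managed MDS daemons and their states
ceph orch ps --daemon-type mds

# Recent logs for a failing MDS daemon, run on its host
# (<fsid> and <daemon-name> are placeholders - take them from `cephadm ls`)
journalctl -u ceph-<fsid>@mds.<daemon-name>.service --since "-1 hour"
```

These commands only read cluster state; none of them change the configuration, so they should be safe to run even while the filesystem is degraded.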