CephFS standby-replay has more dns/inos/dirs than the active MDS

We have a cluster running multiple filesystems on Pacific (16.2.7), and even though mds_cache_memory_limit is set to 80 GiB, one of the MDS daemons is using 123.1 GiB. That daemon happens to be the standby-replay MDS, and I'm wondering if the extra memory is because it's tracking more dns/inos/dirs than the active MDS:

$ sudo ceph fs status cephfs19
cephfs19 - 28 clients
========
RANK      STATE           MDS          ACTIVITY     DNS    INOS   DIRS   CAPS
 0        active      ceph006b  Reqs: 2879 /s  27.8M  27.8M  3490k  7767k
0-s   standby-replay  ceph008a  Evts: 1446 /s  40.1M  40.0M  6259k     0

Shouldn't the standby-replay MDS daemons have similar stats to the active MDS they're protecting?  What could be causing this to happen?
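
For reference, here's roughly how I'd cross-check the configured limit against the daemon's own cache accounting (these commands assume shell access to the host running ceph008a, since "ceph daemon" talks to the local admin socket; the daemon name is just the one from the status output above):

$ sudo ceph config get mds mds_cache_memory_limit                    # cluster-wide setting
$ sudo ceph daemon mds.ceph008a config get mds_cache_memory_limit    # value the daemon is actually running with
$ sudo ceph daemon mds.ceph008a cache status                         # the MDS's own view of its cache memory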

Thanks,
Bryan