Hello Pedro,

This is a known bug in the standby-replay MDS. Please see the links below and wait patiently for the resolution. Restarting the standby-replay MDS will clear the warning with zero client impact, and realistically, that's the only thing you can do (besides disabling standby-replay entirely).

https://tracker.ceph.com/issues/40213
https://github.com/ceph/ceph/pull/48483

On Tue, Sep 19, 2023 at 6:51 AM Pedro Lopes <pavila@xxxxxxxxxxx> wrote:
> So I'm getting this warning (although there are no noticeable problems in
> the cluster):
>
> $ ceph health detail
> HEALTH_WARN 1 MDSs report oversized cache
> [WRN] MDS_CACHE_OVERSIZED: 1 MDSs report oversized cache
>     mds.storefs-b(mds.0): MDS cache is too large (7GB/4GB); 0 inodes in
> use by clients, 0 stray files
>
> Ceph FS status:
>
> $ ceph fs status
> storefs - 20 clients
> =======
> RANK      STATE           MDS        ACTIVITY       DNS    INOS   DIRS   CAPS
>  0        active          storefs-a  Reqs:    0 /s  1385k  1385k  113k   193k
> 0-s       standby-replay  storefs-b  Evts:    0 /s  3123k  3123k  33.5k     0
>
>       POOL           TYPE      USED   AVAIL
> storefs-metadata    metadata   19.4G  12.6T
> storefs-pool4x      data       4201M  9708G
> storefs-pool2x      data       2338G  18.9T
>
> MDS version: ceph version 17.2.5
> (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
>
> What is it telling me? Is it just a case of the cache size needing to be
> bigger? Or is it a problem with the clients holding onto some kind of
> reference (the documentation says this can be a cause, but not how to
> check for it)?
>
> Thanks in advance,
> Pedro Lopes
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

--
Alexander E. Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
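[For reference, the "restart the standby-replay MDS" workaround mentioned above can be done roughly as sketched below. This is only an illustration, not from the thread: the cephadm daemon name is a placeholder (under cephadm, MDS daemons are named mds.<fs>.<host>.<suffix>; check `ceph orch ps` for the real name), and the systemd unit form applies only to package-based deployments.]

```shell
# List MDS daemons to identify the standby-replay one
# (daemon names below are placeholders for this cluster).
ceph orch ps --daemon-type mds

# cephadm deployments: restart just the standby-replay daemon
# (substitute the actual daemon name from the listing above).
ceph orch daemon restart mds.storefs.host2.abcdef

# Package-based deployments: restart the MDS service on the
# standby-replay host instead.
systemctl restart ceph-mds@storefs-b
```

Since rank 0 stays active throughout, clients keep their sessions and caps; only the standby's replay journal and cache are rebuilt, which clears the oversized-cache warning until it accumulates again.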