Hello Jesper,

On Thu, Apr 16, 2020 at 4:06 AM <jesper@xxxxxxxx> wrote:
>
> Hi.
>
> I have a cluster that has been running for close to 2 years now - pretty
> much with the same settings, but over the past day I'm seeing this warning.
>
> (And the cache seems to keep growing.) Can I figure out which clients are
> accumulating the inodes?
>
> Ceph 12.2.8 - is it OK just to "bump" the memory to, say, 128GB - any
> negative side effects?
>
> jk@ceph-mon1:~$ sudo ceph health detail
> HEALTH_WARN 1 MDSs report oversized cache; 3 clients failing to respond to
> cache pressure
> MDS_CACHE_OVERSIZED 1 MDSs report oversized cache
>     mdsceph-mds1(mds.0): MDS cache is too large (91GB/32GB); 34400070
> inodes in use by clients, 3293 stray files

Can you share the client list? Use the `ceph tell mds.foo session ls` command.

--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
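For reference, one way to spot the heaviest cap holders in that session list - a minimal sketch, assuming jq is available and that the `session ls` JSON exposes num_caps and client_metadata fields as in Luminous-era releases (field names may differ on other versions):

  # Dump all client sessions on the active MDS (daemon name taken from
  # the health output above) and sort by num_caps, i.e. roughly how many
  # inodes each client is pinning in the MDS cache; biggest holders last.
  ceph tell mds.ceph-mds1 session ls | \
    jq -r '.[] | [.id, .num_caps, (.client_metadata.hostname // "?")] | @tsv' | \
    sort -n -k2

The sessions at the bottom of that list are the clients most likely behind the "failing to respond to cache pressure" warning.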