The MDS thinks it's using 486MB of cache right now, and while that's not a complete accounting (I believe you should generally multiply the configured cache limit by 1.5 to get a realistic memory-consumption estimate), it's obviously a long way from 12.5GB. You might try going in with the "ceph daemon" command and looking at the heap stats (I forget the exact command, but it will tell you if you run "help" against it) and seeing what those say; you may have one of the slightly-broken base systems and find that running the "heap release" (or similarly worded) command frees up a lot of RAM back to the OS!
-Greg

On Wed, Jul 18, 2018 at 1:53 PM, Daniel Carrasco <d.carrasco@xxxxxxxxx> wrote:
> Hello,
>
> I've created a 3-node cluster with MON, MGR, OSD and MDS on all of them
> (2 active MDS), and I've noticed that the MDS is using a lot of memory
> (right now it is using 12.5GB of RAM):
>
> # ceph daemon mds.kavehome-mgto-pro-fs01 dump_mempools | jq -c '.mds_co';
> ceph daemon mds.kavehome-mgto-pro-fs01 perf dump | jq '.mds_mem.rss'
> {"items":9272259,"bytes":510032260}
> 12466648
>
> I've configured the limit:
> mds_cache_memory_limit = 536870912
>
> But it looks like it is ignored, because that is about 512MB and the MDS
> is using a lot more.
>
> Is there any way to limit the memory usage of the MDS? It is causing a lot
> of trouble because the machine starts to swap.
> Maybe I have to limit the number of cached inodes?
>
> The other active MDS is using a lot less memory (2.5GB), but it is also
> using more than 512MB. The standby MDS is not using any memory at all.
>
> I'm using version:
> ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous
> (stable)
>
> Thanks!!
> --
> _________________________________________
>
> Daniel Carrasco Marín
> Ingeniería para la Innovación i2TIC, S.L.
> Tlf: +34 911 12 32 84 Ext: 223
> www.i2tic.com
> _________________________________________
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
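
For reference, the heap commands Greg alludes to are, if memory serves, along these lines (exact names can differ between releases, so check the admin socket's "help" output first; the MDS name below is simply the one from the thread):

    # list the commands this MDS's admin socket actually supports
    ceph daemon mds.kavehome-mgto-pro-fs01 help

    # show tcmalloc heap statistics for the daemon
    ceph tell mds.kavehome-mgto-pro-fs01 heap stats

    # ask tcmalloc to hand unused pages back to the OS
    ceph tell mds.kavehome-mgto-pro-fs01 heap release

If "heap stats" reports a large amount of memory "freed" but not released to the OS, the "heap release" call is the one that should shrink the RSS.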