Fwd: MDS memory usage is very high

Hello,

I've created a 3-node cluster with MON, MGR, OSD and MDS daemons on every node (2 active MDS), and I've noticed that the MDS is using a lot of memory (right now it's using 12.5 GB of RAM):
# ceph daemon mds.kavehome-mgto-pro-fs01 dump_mempools | jq -c '.mds_co'; ceph daemon mds.kavehome-mgto-pro-fs01 perf dump | jq '.mds_mem.rss'
{"items":9272259,"bytes":510032260}
12466648
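For what it's worth, the two figures above seem to be in different units, which makes the gap look even larger than it is: mds_co.bytes is in bytes, while (as far as I can tell, inferred from the 12.5 GB figure) mds_mem.rss from perf dump is in KiB. A quick sketch to put both in MiB:

```shell
# Convert both figures to MiB for comparison (assuming mds_co.bytes
# is in bytes and mds_mem.rss is in KiB, which matches ~12.5 GB above).
cache_bytes=510032260
rss_kib=12466648
echo "cache: $((cache_bytes / 1024 / 1024)) MiB"
echo "rss:   $((rss_kib / 1024)) MiB"
```

So the cache pool itself (~486 MiB) actually sits under the 512 MiB limit; it's the total resident size that has grown to ~12 GiB.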

I've configured the limit:
mds_cache_memory_limit = 536870912

But it looks like the limit is ignored: it's about 512 MiB, yet the daemon is using a lot more.

Is there any way to limit the memory usage of the MDS? It's causing a lot of trouble because the node starts to swap.
Maybe I have to limit the number of cached inodes?
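In case it helps, this is the kind of thing I'd try in ceph.conf — a sketch only: as far as I know, Luminous still accepts the older inode-count limit mds_cache_size alongside the memory limit (0 means unlimited), and the 100000 value below is just an example, not a recommendation:

```ini
# Sketch of an [mds] section. mds_cache_size is the older
# inode-count limit (0 = unlimited); 100000 is only an example.
[mds]
mds_cache_memory_limit = 536870912
mds_cache_size = 100000
```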

The other active MDS is using a lot less memory (2.5 GB), but it's also above 512 MiB. The standby MDS is not using memory at all.

I'm using this version:
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable).

Thanks!!
--
_________________________________________

      Daniel Carrasco Marín
      Ingeniería para la Innovación i2TIC, S.L.
      Tlf:  +34 911 12 32 84 Ext: 223
      www.i2tic.com
_________________________________________

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
