Re: Fwd: MDS memory usage is very high

Hello, thanks for your response.

This is what I get:

# ceph tell mds.kavehome-mgto-pro-fs01  heap stats
2018-07-19 00:43:46.142560 7f5a7a7fc700  0 client.1318388 ms_handle_reset on 10.22.0.168:6800/1129848128
2018-07-19 00:43:46.181133 7f5a7b7fe700  0 client.1318391 ms_handle_reset on 10.22.0.168:6800/1129848128
mds.kavehome-mgto-pro-fs01 tcmalloc heap stats:------------------------------------------------
MALLOC:     9982980144 ( 9520.5 MiB) Bytes in use by application
MALLOC: +            0 (    0.0 MiB) Bytes in page heap freelist
MALLOC: +    172148208 (  164.2 MiB) Bytes in central cache freelist
MALLOC: +     19031168 (   18.1 MiB) Bytes in transfer cache freelist
MALLOC: +     23987552 (   22.9 MiB) Bytes in thread cache freelists
MALLOC: +     20869280 (   19.9 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =  10219016352 ( 9745.6 MiB) Actual memory used (physical + swap)
MALLOC: +   3913687040 ( 3732.4 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =  14132703392 (13478.0 MiB) Virtual address space used
MALLOC:
MALLOC:          63875              Spans in use
MALLOC:             16              Thread heaps in use
MALLOC:           8192              Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).
Bytes released to the OS take up virtual address space but no physical memory.


I've tried the release command, but the daemon keeps using the same amount of memory.
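
(For reference, the release I tried is the tcmalloc one exposed through "ceph tell", something like the following with the same daemon name as above:

# ceph tell mds.kavehome-mgto-pro-fs01 heap release

Given that the stats above still show ~9.5 GiB in use by the application and 0 MiB in the page heap freelist, it doesn't look like there is much freelist memory to give back to the OS.)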

Regards!


2018-07-19 0:25 GMT+02:00 Gregory Farnum <gfarnum@xxxxxxxxxx>:
The MDS thinks it's using 486MB of cache right now, and while that's
not a complete accounting (I believe you should generally multiply the
configured cache limit by 1.5 to get a realistic memory consumption
model), it's obviously a long way from 12.5GB. You might try going in
with the "ceph daemon" command and looking at the heap stats (I forget
the exact command, but it will tell you if you run "help" against it)
and seeing what those say — you may have one of the slightly-broken
base systems and find that running the "heap release" (or similar
wording) command will free up a lot of RAM back to the OS!
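
(Off the top of my head it's something along these lines, but check the
output of "help" against the daemon socket for the exact wording:

ceph daemon mds.kavehome-mgto-pro-fs01 heap stats
ceph daemon mds.kavehome-mgto-pro-fs01 heap release)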
-Greg

On Wed, Jul 18, 2018 at 1:53 PM, Daniel Carrasco <d.carrasco@xxxxxxxxx> wrote:
> Hello,
>
> I've created a 3-node cluster with MON, MGR, OSD and MDS on all nodes (2
> active MDS), and I've noticed that the MDS is using a lot of memory (right
> now it is using 12.5GB of RAM):
> # ceph daemon mds.kavehome-mgto-pro-fs01 dump_mempools | jq -c '.mds_co';
> ceph daemon mds.kavehome-mgto-pro-fs01 perf dump | jq '.mds_mem.rss'
> {"items":9272259,"bytes":510032260}
> 12466648
>
> I've configured the limit:
> mds_cache_memory_limit = 536870912
>
> But it looks like it is being ignored, because the limit is about 512MB while the daemon is using a lot more.
>
> Is there any way to limit the memory usage of the MDS? It is causing a lot
> of trouble because the node starts to swap.
> Maybe I have to limit the number of cached inodes?
>
> The other active MDS is using a lot less memory (2.5GB), but it is also using
> more than 512MB. The standby MDS is not using memory at all.
>
> I'm using this version:
> ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous
> (stable).
>
> Thanks!!
> --
> _________________________________________
>
>       Daniel Carrasco Marín
>       Ingeniería para la Innovación i2TIC, S.L.
>       Tlf:  +34 911 12 32 84 Ext: 223
>       www.i2tic.com
> _________________________________________
>
>
>



--
_________________________________________

      Daniel Carrasco Marín
      Ingeniería para la Innovación i2TIC, S.L.
      Tlf:  +34 911 12 32 84 Ext: 223
      www.i2tic.com
_________________________________________
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
