Re: ceph mds memory usage 20GB: is it normal?

Did the fs have lots of mount/umount activity?  We recently found a
memory-leak bug in that area: https://github.com/ceph/ceph/pull/20148
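
If you want to check whether the extra RSS is live memory or just pages
the allocator is holding on to, the heap admin commands may help (a
quick sketch, assuming the mds is built with tcmalloc, which is the
default):

  # show tcmalloc heap statistics for the daemon
  ceph tell mds.2 heap stats
  # ask tcmalloc to return freed pages to the OS
  ceph tell mds.2 heap release

If RSS drops noticeably after "heap release", the growth is mostly
allocator behaviour rather than a real leak.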

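It is also worth comparing the configured limit with what the cache
itself accounts for (run on the host holding the mds admin socket;
"cache status" should be available on luminous):

  # the configured soft limit on cache memory
  ceph daemon mds.2 config get mds_cache_memory_limit
  # cache usage as the mds accounts it
  ceph daemon mds.2 cache status
  # client sessions; heavy mount/umount churn shows up here
  ceph daemon mds.2 session ls

Your perf dump below shows about 2.0M inodes cached (1.8M pinned),
which looks consistent with the 5G cache limit; the remaining ~15G of
RSS is the part that needs explaining.
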
Regards
Yan, Zheng

On Thu, Mar 22, 2018 at 5:29 PM, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:
> Hi,
>
> I've been running cephfs for 2 months now,
>
> and my active mds memory usage is around 20G (and still growing).
>
> USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> ceph     1521539 10.8 31.2 20929836 20534868 ?   Ssl  janv.26 8573:34 /usr/bin/ceph-mds -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
>
>
> This is on Luminous 12.2.2.
>
> The only tuning done is:
>
> mds_cache_memory_limit = 5368709120
>
>
> (5GB). I know it's a soft limit, but 20G seems quite huge vs 5GB...
>
>
> Is it normal?
>
>
>
>
> # ceph daemon mds.2 perf dump mds
> {
>     "mds": {
>         "request": 1444009197,
>         "reply": 1443999870,
>         "reply_latency": {
>             "avgcount": 1443999870,
>             "sum": 1657849.656122933,
>             "avgtime": 0.001148095
>         },
>         "forward": 0,
>         "dir_fetch": 51740910,
>         "dir_commit": 9069568,
>         "dir_split": 64367,
>         "dir_merge": 58016,
>         "inode_max": 2147483647,
>         "inodes": 2042975,
>         "inodes_top": 152783,
>         "inodes_bottom": 138781,
>         "inodes_pin_tail": 1751411,
>         "inodes_pinned": 1824714,
>         "inodes_expired": 7258145573,
>         "inodes_with_caps": 1812018,
>         "caps": 2538233,
>         "subtrees": 2,
>         "traverse": 1591668547,
>         "traverse_hit": 1259482170,
>         "traverse_forward": 0,
>         "traverse_discover": 0,
>         "traverse_dir_fetch": 30827836,
>         "traverse_remote_ino": 7510,
>         "traverse_lock": 86236,
>         "load_cent": 144401980319,
>         "q": 49,
>         "exported": 0,
>         "exported_inodes": 0,
>         "imported": 0,
>         "imported_inodes": 0
>     }
> }
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


