Re: ceph mds memory usage 20GB : is it normal ?

Hello Brady,

On Thu, May 10, 2018 at 7:35 AM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> I am now seeing the exact same issues you are reporting. A heap release did
> nothing for me.

I'm not sure it's the same issue...

> [root@mds0 ~]# ceph daemon mds.mds0 config get mds_cache_memory_limit
> {
>     "mds_cache_memory_limit": "80530636800"
> }

80G, right? What was the memory use from `ps aux | grep ceph-mds`?
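(For reference, 80,530,636,800 bytes is about 80.5 GB, i.e. exactly 75 GiB.)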

> [root@mds0 ~]# ceph daemon mds.mds0 perf dump
> {
> ...
>         "inode_max": 2147483647,
>         "inodes": 35853368,
>         "inodes_top": 23669670,
>         "inodes_bottom": 12165298,
>         "inodes_pin_tail": 18400,
>         "inodes_pinned": 2039553,
>         "inodes_expired": 142389542,
>         "inodes_with_caps": 831824,
>         "caps": 881384,

Your cap count is roughly 2% of the inodes in cache, and the pinned inodes
are roughly 5% of the total. Your cache should be getting trimmed, assuming
the cache size as measured by the MDS (there are fixes in 12.2.5 which
improve its precision) is larger than your configured limit.
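(From your perf dump: 881,384 caps / 35,853,368 inodes ≈ 2.5%, and
2,039,553 pinned / 35,853,368 inodes ≈ 5.7%.)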

If the cache size is larger than the limit (check with the `cache status`
admin socket command), then we'd be interested in seeing a few seconds of
the MDS debug log with higher debugging set (`config set debug_mds 20`).
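
For example (a rough sketch only, reusing the daemon name mds.mds0 from your
earlier commands):

  ceph daemon mds.mds0 cache status
  ceph daemon mds.mds0 config set debug_mds 20
  # capture a few seconds of the MDS log, then drop back to the default (1/5)
  ceph daemon mds.mds0 config set debug_mds 1/5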

-- 
Patrick Donnelly
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


