[ceph-admin@mds0 ~]$ ps aux | grep ceph-mds
ceph 1841 3.5 94.3 133703308 124425384 ? Ssl Apr04 1808:32 /usr/bin/ceph-mds -f --cluster ceph --id mds0 --setuser ceph --setgroup ceph
[ceph-admin@mds0 ~]$ sudo ceph daemon mds.mds0 cache status
{
    "pool": {
        "items": 173261056,
        "bytes": 76504108600
    }
}
So, 80GB is my configured limit for the cache, and it appears the MDS is staying within that limit. But the ceph-mds process is using over 100GB of RAM on my 128GB host. I thought I was playing it safe by configuring the cache at 80GB. What else consumes a lot of RAM for this process?
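For context, here's roughly how I'm comparing the numbers; just a sketch against my own daemon (id mds0), and the last line assumes the tcmalloc allocator, which is the default in my build:

# resident set size of the ceph-mds process, in KB
ps -C ceph-mds -o rss=
# cache memory as accounted by the MDS mempool (the "bytes" figure above)
sudo ceph daemon mds.mds0 cache status
# tcmalloc's view of the heap, including memory freed but not yet returned to the OS
sudo ceph tell mds.mds0 heap stats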
Let me know if I need to create a new thread.
On Thu, May 10, 2018 at 12:40 PM, Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
Hello Brady,
On Thu, May 10, 2018 at 7:35 AM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> I am now seeing the exact same issues you are reporting. A heap release did
> nothing for me.
I'm not sure it's the same issue...
> [root@mds0 ~]# ceph daemon mds.mds0 config get mds_cache_memory_limit
> {
>     "mds_cache_memory_limit": "80530636800"
> }
80G right? What was the memory use from `ps aux | grep ceph-mds`?
> [root@mds0 ~]# ceph daemon mds.mds0 perf dump
> {
>     ...
>     "inode_max": 2147483647,
>     "inodes": 35853368,
>     "inodes_top": 23669670,
>     "inodes_bottom": 12165298,
>     "inodes_pin_tail": 18400,
>     "inodes_pinned": 2039553,
>     "inodes_expired": 142389542,
>     "inodes_with_caps": 831824,
>     "caps": 881384,
Your cap count is about 2% of the inodes in cache; the pinned inodes are about 5% of the total. Your cache should be getting trimmed, assuming the cache size (as measured by the MDS; there are fixes in 12.2.5 which improve its precision) is larger than your configured limit.
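For reference, here's a quick way to pull those ratios straight out of the perf dump (a sketch; it assumes python3 is available on the MDS host):

sudo ceph daemon mds.mds0 perf dump | python3 -c '
import json, sys
mds = json.load(sys.stdin)["mds"]  # the "mds" counter section
print("caps/inodes:          %.1f%%" % (100.0 * mds["caps"] / mds["inodes"]))
print("inodes_pinned/inodes: %.1f%%" % (100.0 * mds["inodes_pinned"] / mds["inodes"]))'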
If the cache size is larger than the limit (check with the `cache status` admin socket command), then we'd be interested in seeing a few seconds of the MDS debug log with higher debugging set (`config set debug_mds 20`).
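Something like this is enough to capture it (mds0 as the daemon id; 1/5 is the default debug_mds level to restore afterwards):

sudo ceph daemon mds.mds0 config set debug_mds 20   # raise MDS debugging
sleep 10                                            # let it log for a few seconds
sudo ceph daemon mds.mds0 config set debug_mds 1/5  # restore the default level
# the output goes to the MDS log, /var/log/ceph/ceph-mds.mds0.log by default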
--
Patrick Donnelly
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com