Re: mgr memory usage constantly increasing

Hi,
there was a thread [1] about this just a few weeks ago. Which mgr modules are enabled in your case? The mgr caps also seem to be relevant here.

[1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/BKP6EVZZHJMYG54ZW64YABYV6RLPZNQO/
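Both points can be checked quickly on the cluster; a sketch (the daemon name below is taken from the status output in the quoted mail and may differ on your system):

```shell
# List all mgr modules and show which ones are enabled
ceph mgr module ls

# Show the caps granted to the active mgr's auth key
ceph auth get mgr.host2.nvwzhc
```

These are read-only queries and safe to run on a production cluster.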

Quoting Tobias Hachmer <t.hachmer@xxxxxx>:

Hello list,

we have noticed that the active mgr process in our 3-node Ceph cluster consumes a lot of memory. After startup, memory usage increases constantly; after 6 days the process occupies ~67 GB:

~# ps -p 7371 -o rss,%mem,cmd
  RSS %MEM CMD
71053880 26.9 /usr/bin/ceph-mgr -n mgr.hostname.nvwzhc -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false
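The RSS column reported by ps is in KiB, so the figure above converts to the ~67 GB mentioned; a minimal sketch of the arithmetic:

```python
def kib_to_gib(kib: int) -> float:
    """Convert a KiB value (ps's RSS column) to GiB."""
    return kib / 1024 / 1024

# RSS of the mgr process from the ps output above
print(f"{kib_to_gib(71053880):.1f} GiB")  # ~67.8 GiB
```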

Cluster Specs:
- 3 nodes, 3-way replication
- each node has 256GB memory
- each node has 10x 7.68TB NVMe, each NVMe is split into 2 OSDs
- monitoring node is a separate VM (separate Hypervisor)
- iSCSI and NFS Gateway are located on 2 separate VMs (separate Hypervisor)
- main purpose is CephFS with currently ~15.42 million objects

Is this normal behaviour or might we have a misconfiguration somewhere?

What can we do to dig into this further?
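One place to start: if the mgr is built against tcmalloc (the default for upstream packages), its allocator can be queried directly. A sketch, reusing the daemon name from the status output below:

```shell
# Print tcmalloc heap statistics for the active mgr
# (distinguishes memory the application holds from memory
# the allocator is merely caching)
ceph tell mgr.host2.nvwzhc heap stats

# Ask the allocator to return freed pages to the OS;
# if RSS drops sharply, the growth was allocator caching
# rather than a leak in the mgr itself
ceph tell mgr.host2.nvwzhc heap release
```

If RSS keeps climbing even after a heap release, disabling mgr modules one at a time can help narrow down which module is responsible.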

~# ceph status
  cluster:
    id:     f5129939-964b-11ed-bb6a-f7caa5af2f56
    health: HEALTH_OK

  services:
    mon:         3 daemons, quorum host1,host2,host3 (age 6d)
    mgr:         host2.nvwzhc(active, since 6d), standbys: host1.wwczzn
    mds:         1/1 daemons up, 1 standby, 1 hot standby
    osd:         60 osds: 60 up (since 6d), 60 in (since 8w)
    tcmu-runner: 2 portals active (2 hosts)

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 2161 pgs
    objects: 15.42M objects, 45 TiB
    usage:   135 TiB used, 75 TiB / 210 TiB avail
    pgs:     2161 active+clean

Thanks and kind regards
Tobias Hachmer


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


