It's a bit of a kludge, but failing the active mgr on a regular
schedule works around this issue (which we also see on our 17.2.5
cluster). We just have a cron job that fails the active mgr every 24
hours - the active mgr seems to grow to ~30G, then drop back to
10-15G once it goes into standby mode.
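For reference, a minimal sketch of such a cron job as an /etc/cron.d
snippet. The 04:00 schedule and the /usr/bin/ceph path are just
assumptions, and this relies on "ceph mgr fail" without a daemon name
failing the currently active mgr (supported on recent releases; on
older ones you would pass the active mgr's name) with client.admin
credentials available on the host:

  # Fail the active mgr once a day so a standby takes over and the
  # old daemon's memory is released.
  0 4 * * * root /usr/bin/ceph mgr fail

Pick a quiet window for your cluster; the failover itself is quick
but briefly interrupts mgr services like the dashboard.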
Simon Fowler
On 23/5/23 22:14, Tobias Hachmer wrote:
Hi Eugen,
On 5/23/23 at 12:50, Eugen Block wrote:
there was a thread [1] just a few weeks ago. Which mgr modules are
enabled in your case? Also the mgr caps seem to be relevant here.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/BKP6EVZZHJMYG54ZW64YABYV6RLPZNQO/
thanks for the hint and the link. We actually use the restful module
and had modified the mgr caps for Zabbix monitoring. I have now
reverted the mgr caps to default and will observe the memory usage.
I think we ran into the same issue here.
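For anyone following along, a sketch of the commands involved, using
a hypothetical mgr key name "mgr.x" and the stock caps recent
releases create for mgr keys (verify against the docs for your
version before applying):

  # See which mgr modules are enabled
  ceph mgr module ls

  # Inspect the current caps of the mgr key
  ceph auth get mgr.x

  # Reset the mgr key to the default caps
  ceph auth caps mgr.x mon 'allow profile mgr' osd 'allow *' mds 'allow *'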
Thanks
Tobias
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx