Hello,

We have a Nautilus cluster exhibiting what looks like this bug: https://tracker.ceph.com/issues/39618

No matter what osd_memory_target is set to (currently 2147483648, i.e. 2 GiB), each OSD process surpasses that value, peaks around 4.0 GB, and eventually starts using swap. The cluster stays stable for about a week, then runs into OOM issues, OSDs get killed, and each node needs a reboot to get back to a stable state.

Has anyone run into anything similar, or found a workaround?

Ceph version: 14.2.1, RGW clients
OS: CentOS Linux release 7.6.1810 (Core)
Kernel: 3.10.0-957.12.1.el7.x86_64
Hardware: 256 GB RAM per OSD node, 60 OSDs per node
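In case it helps with diagnosis, here is roughly how the target is applied and how we've been inspecting per-daemon usage; osd.0 below is just a placeholder for any OSD on an affected node:

    # set the per-OSD memory target cluster-wide (2 GiB)
    ceph config set osd osd_memory_target 2147483648

    # verify the running daemon actually picked the value up
    ceph daemon osd.0 config get osd_memory_target

    # break down where tracked memory is going (bluestore cache, pglog, etc.)
    ceph daemon osd.0 dump_mempools

    # tcmalloc's view of the heap, including memory held but not returned to the OS
    ceph tell osd.0 heap stats

The dump_mempools totals stay well under the target; the gap shows up in the heap stats, which is what made us suspect the tracker issue above.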
Thanks,