Hi Philippe,
Have you looked at the mempool stats yet?
ceph daemon osd.NNN dump_mempools
You may also want to look at the heap stats, and potentially enable
debug 5 for bluestore to see what the priority cache manager is doing.
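Something along these lines should do it (osd.NNN is a placeholder for the
OSD id; the heap command assumes the default tcmalloc allocator):

ceph tell osd.NNN heap stats                      # tcmalloc heap usage summary
ceph daemon osd.NNN config set debug_bluestore 5  # raise bluestore log level on the running daemon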
Typically in these cases we end up seeing a ton of memory used by
something, and the priority cache manager tries to compensate by
shrinking the caches, but you won't really know until you start looking
at the various statistics and logging.
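If I remember right, the priority cache manager in Nautilus sizes the
caches against osd_memory_target (4GB by default), so it's also worth
confirming what the OSDs are actually set to, e.g.:

ceph daemon osd.NNN config show | grep osd_memory_target   # effective value on the running OSD
ceph config get osd.NNN osd_memory_target                  # value stored in the mon config db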
Mark
On 10/28/19 2:54 AM, Philippe D'Anjou wrote:
Hi,
we are seeing quite high memory usage by OSDs since Nautilus, averaging
10GB/OSD for 10TB HDDs. I even had OOM issues on 128GB systems because
single OSD processes used up to 32% of memory.
Here is an example of how they look on average: https://i.imgur.com/kXCtxMe.png
Is that normal? I never saw this on Luminous. Memory leaks?
We are using all default values; the OSDs have no special configuration.
The use case is CephFS.
v14.2.4 on Ubuntu 18.04 LTS
Seems a bit high?
Thanks for the help
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com