Re: Nautilus OSD memory consumption?

Have you tried dumping the mempools? The memory autotuner grows or shrinks the BlueStore caches to keep the total mapped memory of the OSD process just under the target. If there's a memory leak, or some other part of the OSD is using more memory than it should, the autotuner will shrink the caches to a base minimum; at that point it can't do anything more, and memory usage will exceed the target. It sounds like you might be hitting that case. One reason this can happen is a huge number of PGs (many thousands per OSD).
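
For reference, something like the following should show the mempool breakdown and the per-OSD PG counts (a sketch; osd.0 is a placeholder id, and dump_mempools has to run on the host where that OSD lives):

root@cnx-31:~# ceph daemon osd.0 dump_mempools   # per-pool memory usage, incl. bluestore cache
root@cnx-31:~# ceph osd df                       # the PGS column shows PGs per OSD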


Mark


On 2/25/20 9:02 PM, Nigel Williams wrote:
The OOM-killer is on the rampage and striking down hapless OSDs when
the cluster is under heavy client IO.

The memory target does not seem to be much of a limit; is this intentional?

root@cnx-11:~# ceph-conf --show-config|fgrep osd_memory_target
osd_memory_target = 4294967296
osd_memory_target_cgroup_limit_ratio = 0.800000
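
(As a cross-check, the value a running daemon is actually using can be read over its admin socket; osd.0 is a placeholder id:)

root@cnx-11:~# ceph daemon osd.0 config get osd_memory_target
{
    "osd_memory_target": "4294967296"
}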

root@cnx-31:~# pmap 4327|fgrep total
  total          6794892K
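
(Note: pmap's total counts all virtual mappings; the resident set size is a closer match for what the autotuner manages, so a quick comparison might be:)

root@cnx-31:~# ps -o pid,rss,vsz -p 4327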

Are there any tips for controlling the OSD memory consumption?

The hosts involved have 128GB or 192GB of memory and 12 SATA OSDs each, so
even at 4GB per OSD (48GB total) there should be plenty of free memory.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



