Re: Limiting OSD process memory use in Nautilus.

As of 13.2.3, you should use 'osd_memory_target' instead of
'bluestore_cache_size'.
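
For example, to give each OSD roughly a 2 GiB budget (the 2 GiB
figure is just an illustration; size it to your hardware):

    # Nautilus centralized config; applies to all OSDs
    ceph config set osd osd_memory_target 2147483648

    # or the equivalent in ceph.conf:
    [osd]
    osd_memory_target = 2147483648

Note that it's a target, not a hard cap: the daemon trims its caches
to stay near it, so leave some headroom below MemoryMax. As I
understand it, osd_memory_target_cgroup_limit_ratio also means the
OSD will derive its target from a cgroup memory limit when one is
set, so with your MemoryMax=3072M the effective target would come
out around 0.8 * 3072M = 2457M.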
--
Adam


On Tue, Apr 16, 2019 at 10:28 AM Jonathan Proulx <jon@xxxxxxxxxxxxx> wrote:
>
> Hi All,
>
> I have a few servers that are a bit undersized on RAM for the number
> of OSDs they run.
>
> When we switched to BlueStore about a year ago I'd "fixed" this
> (well, kept them from OOMing) by setting bluestore_cache_size_ssd and
> bluestore_cache_size_hdd, and that worked.
>
> After upgrading to Nautilus, the OSDs are again running away and
> OOMing out.
>
> I noticed "osd_memory_target_cgroup_limit_ratio": "0.800000", so I
> tried setting 'MemoryHigh' and 'MemoryMax' in the unit file. But the
> OSD process still happily runs right up to that line and lets the OS
> deal with it (and it deals harshly).
>
> Currently I have:
>
>     "bluestore_cache_size": "0",
>     "bluestore_cache_size_hdd": "1073741824",
>     "bluestore_cache_size_ssd": "1073741824",
>
> and
>         MemoryHigh=2560M
>         MemoryMax=3072M
>
> and the processes keep running right up to that 3G line and getting
> smacked down, which is causing performance issues as they thrash, and
> which I suspect is behind some scrub issues I've seen recently.
>
> I guess the next straw to grab at is setting "bluestore_cache_size",
> but is there something I'm missing here?
>
> Thanks,
> -Jon
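
One mechanical aside: if you do keep the systemd limits, a drop-in is
safer than editing the unit file itself, since it survives package
upgrades (the OSD id 3 below is just a placeholder):

    # opens an editor on a drop-in such as
    # /etc/systemd/system/ceph-osd@3.service.d/override.conf
    systemctl edit ceph-osd@3

    # contents:
    [Service]
    MemoryHigh=2560M
    MemoryMax=3072M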
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


