Re: Out of Memory after Upgrading to Nautilus


 



Hi Christoph,


1GB per OSD is tough! The osd memory target only shrinks the size of the caches; it can't control things like OSD map size, pg log length, rocksdb WAL buffers, etc. It's a "best effort" algorithm that tries to fit the OSD's mapped memory into the target, but on its own it doesn't really do well below 2GB/OSD (and even that can be tough when only adjusting the caches). That's one of the reasons the default is 4GB. To fit into 1GB you'll probably also need to reduce some of the things mentioned above, but there will be consequences (slower recovery, higher write amplification in rocksdb, etc). By default a bluestore OSD typically won't fit into a 1GB memory target, and we don't regularly test configurations with that little memory per OSD.
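
As a rough sketch of where those knobs live (the numbers below are guesses for illustration, not tested recommendations, and shrinking them costs recovery speed and rocksdb efficiency):

    [osd]
        osd memory target = 1073741824
        osd memory cache min = 67108864      # lower floor for the autotuned caches (guess)
        osd map cache size = 20              # keep fewer OSD maps in memory (guess)
        osd min pg log entries = 500         # shorter pg logs -> more backfill instead of log recovery (guess)
        osd max pg log entries = 500
        # rocksdb WAL buffers are tuned via bluestore_rocksdb_options
        # (write_buffer_size, max_write_buffer_number) if you want to go further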


You might want to look at the memory pool performance counters, the priority cache performance counters, and the tcmalloc heap stats to help figure out where the memory is actually being used.
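
For example, for one of the OSDs (osd.0 here is just a placeholder; run these on the host where that OSD lives):

    ceph daemon osd.0 dump_mempools     # per-pool memory usage (bluestore caches, pg log, osdmap, ...)
    ceph daemon osd.0 perf dump         # includes the prioritycache counters showing what the autotuner assigned
    ceph tell osd.0 heap stats          # tcmalloc heap stats (in-use vs. unmapped/free memory)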


Mark


On 5/5/21 9:30 AM, Christoph Adomeit wrote:
I manage an older cluster of several Ceph nodes, each with 128 GB RAM and 36 OSDs of 8 TB each.

The cluster is just for archive purposes, and performance is not so important.

The cluster was running fine for a long time on Ceph Luminous.

Last week I updated it to Debian 10 and Ceph Nautilus.

Now I can see that the memory usage of each OSD slowly grows to 4 GB, and once the system has no memory left it will OOM-kill processes.

I have already configured osd_memory_target = 1073741824.
This helps for some hours, but then memory usage grows from 1 GB to 4 GB per OSD.

Any ideas what I can do to further limit OSD memory usage?

It would be good to keep the hardware running for some more time without upgrading the RAM in all OSD machines.

Any ideas?

Thanks
   Christoph
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




