Hi Christoph,
I am currently using Nautilus on a ceph cluster with osd_memory_target
defined in ceph.conf on each node.
By running:
ceph config get osd.40 osd_memory_target
you get the default value of the osd_memory_target parameter
(4294967296 for Nautilus).
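This is because ceph config get reads the central configuration database
on the monitors, which settings placed in ceph.conf never reach.
To see a daemon's settings together with the built-in defaults, you can run
something like this (assuming osd.40 is a running OSD; the grep only
shortens the output):
ceph config show-with-defaults osd.40 | grep osd_memory_target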
If you change the ceph.conf file and restart the OSD service as you
said, it really does work, but you must check it with the command:
ceph config show osd.40
which outputs several lines; the one you are interested in looks like:
NAME               VALUE       SOURCE  OVERRIDES  IGNORES
...
osd_memory_target  1073741824  file
...
indicating the value you have specified in the ceph.conf file.
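To avoid reading through the whole table, you can also filter for the
option directly, e.g.:
ceph config show osd.40 | grep osd_memory_target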
You can try again...
Didier
On 5/6/21 10:32 AM, Christoph Adomeit wrote:
It looks like I have solved the issue.
I tried:
ceph.conf
[osd]
osd_memory_target = 1073741824
systemctl restart ceph-osd.target
When I run
ceph config get osd.40 osd_memory_target
it returns:
4294967296
so this did not work.
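To double-check which value the daemon actually runs with, you can query
its admin socket on the node hosting it (assuming osd.40 lives on that node):
ceph daemon osd.40 config get osd_memory_target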
Next I tried:
ceph tell osd.* injectargs '--osd_memory_target 1073741824'
but
ceph config get osd.40 osd_memory_target
still returns:
4294967296
So this also did not work in 14.2.20.
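As far as I know, injectargs only changes the runtime value, while
ceph config get keeps reading the monitors' configuration database; the
injected value should instead show up with SOURCE = override in:
ceph config show osd.40 | grep osd_memory_target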
Next I tried:
ceph config set osd/class:hdd osd_memory_target 1073741824
and that finally worked.
I also slowly increased the memory target; for now I use:
ceph config set osd/class:hdd osd_memory_target 2147483648
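To confirm what is now stored in the monitors' configuration database, you
can check (the osd/class:hdd mask should be listed next to the value):
ceph config dump | grep osd_memory_target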
Thanks
Christoph
On Wed, May 05, 2021 at 04:30:17PM +0200, Christoph Adomeit wrote:
I manage a historical cluster of several Ceph nodes, each with 128 GB RAM and 36 OSDs of 8 TB each.
The cluster is just for archival purposes, so performance is not that important.
The cluster was running fine for a long time on Ceph Luminous.
Last week I updated it to Debian 10 and Ceph Nautilus.
Now I can see that the memory usage of each OSD slowly grows to 4 GB, and once
the system has no memory left it starts OOM-killing processes.
I have already configured osd_memory_target = 1073741824.
This helps for some hours, but then memory usage grows back from 1 GB to 4 GB per OSD.
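For what it's worth, the per-OSD memory breakdown can be watched on each node
via the admin socket (substituting a local OSD id for NN):
ceph daemon osd.NN dump_mempools
which shows, among other things, how much memory the bluestore caches use.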
Any ideas what I can do to further limit OSD memory usage?
It would be good to keep the hardware running for some more time without
upgrading the RAM on all OSD machines.
Any ideas?
Thanks
Christoph
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx