Re: question about OSD onode hits ratio

Check what your osd_memory_target is set to. The default of 4GB is generally a decent starting point, but if you have a large active data set you may benefit from increasing the amount of memory available to the OSDs. The OSDs will generally give that extra memory to the onode cache first if the onode cache is hot.
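
For example, something along these lines (just a sketch, assuming a cluster with the centralized config database; osd.0 and the 8 GiB target are placeholders, not recommendations) would let you check the current value and raise it cluster-wide:

    # Show the configured memory target for one OSD (osd.0 as an example)
    ceph config get osd.0 osd_memory_target

    # Raise the target for all OSDs, e.g. to 8 GiB (value in bytes)
    ceph config set osd osd_memory_target 8589934592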

*Note: In some container-based deployments the osd_memory_target may be set automatically based on the container memory limit (and possibly on the memory available on the node).
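
If you're running under cephadm, a quick way to see whether that autotuning is in effect, and what the daemon is actually running with (again, osd.0 is just a placeholder), is something like:

    # Is cephadm auto-tuning osd_memory_target from host/container memory?
    ceph config get osd osd_memory_target_autotune

    # What value is the running OSD actually using?
    ceph config show osd.0 osd_memory_target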


Mark


On 8/2/23 11:25 PM, Ben wrote:
Hi,
We have had a cluster running for a while. On the Grafana Ceph dashboard, the OSD onode hit ratio was 92% when the cluster was first up and running. After a couple of months it is now 70%. This does not look like a good trend, and I'm wondering what should be done to stop it.

Many thanks,
Ben

--
Best Regards,
Mark Nelson
Head of R&D (USA)

Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson@xxxxxxxxx

We are hiring: https://www.clyso.com/jobs/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



