I was going through the hardware recommendations for a customer and
wanted to cite the memory section from the current docs [1]:
Setting the osd_memory_target below 2GB is not recommended. Ceph may
fail to keep the memory consumption under 2GB and extremely slow
performance is likely.
Setting the memory target between 2GB and 4GB typically works but
may result in degraded performance: metadata may need to be read
from disk during IO unless the active data set is relatively small.
4GB is the current default value for osd_memory_target. This default
was chosen for typical use cases, and is intended to balance RAM
cost and OSD performance.
Setting the osd_memory_target higher than 4GB can improve
performance when there are many (small) objects or when large
(256GB/OSD or more) data sets are processed. This is especially
true with fast NVMe OSDs.
And further:
We recommend budgeting at least 20% extra memory on your system to
prevent OSDs from going OOM (Out Of Memory) during temporary spikes
or due to delay in the kernel reclaiming freed pages.
[1] https://docs.ceph.com/en/quincy/start/hardware-recommendations/#memory
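As a quick sanity check against these guidelines, a back-of-the-envelope
calculation in plain Python (using the 24 GB / 16 OSDs from the thread
below, and reading the 20% recommendation as headroom on top of the
summed targets, which is my interpretation) gives the per-OSD budget:

    # Rough per-OSD memory budget check (illustrative sketch, not a Ceph tool)
    host_ram_gib = 24    # total RAM on the host (figure from the thread below)
    num_osds = 16        # OSDs running on that host
    headroom = 0.20      # ~20% extra recommended for spikes / reclaim delays

    # RAM left for the OSDs once the recommended headroom is set aside
    usable_gib = host_ram_gib / (1 + headroom)
    per_osd_gib = usable_gib / num_osds
    print(f"per-OSD budget: {per_osd_gib:.2f} GiB")

    if per_osd_gib < 2:
        print("below the 2 GB floor: not recommended")
    elif per_osd_gib < 4:
        print("between 2 GB and 4 GB: may work, but degraded performance")
    else:
        print("at or above the 4 GB default")

With those numbers the budget comes out at about 1.25 GiB per OSD,
well below the 2 GB floor the docs warn about.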
Quoting Eugen Block <eblock@xxxxxx>:
Hi,
I don't think increasing the PGs has an impact on the OSD's memory,
at least I'm not aware of such reports and haven't seen it myself.
But your cluster could get into trouble as it is: only 24 GB for
16 OSDs is too low. It can work (and apparently does) while
everything is calm, but during recovery the memory usage spikes. The
default is 4 GB per OSD, and there have been several reports over
the years of users who couldn't get their OSDs back up after a
failure because of low memory settings. I'd recommend increasing
the RAM.
Regards,
Eugen
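(As a side note, and only as a sketch assuming the ceph CLI is
reachable from the host: the currently configured target is easy to
check with `ceph config get osd osd_memory_target`, which can be
wrapped in a few lines of Python to compare against the host's RAM.
I'm assuming the value is reported in plain bytes here.)

    import subprocess

    # Query the globally configured osd_memory_target; "osd.0" instead
    # of "osd" would query a single daemon's setting.
    out = subprocess.run(
        ["ceph", "config", "get", "osd", "osd_memory_target"],
        capture_output=True, text=True, check=True,
    )
    # Assumption: the value is printed as plain bytes, as on recent releases.
    target_bytes = int(out.stdout.strip())
    print(f"configured osd_memory_target: {target_bytes / 2**30:.2f} GiB")

    num_osds = 16  # OSDs on the host in question
    print(f"sum over {num_osds} OSDs: {num_osds * target_bytes / 2**30:.1f} GiB")

With the 4 GB default that sum is 64 GiB for 16 OSDs, which is why a
24 GB host only survives while nothing is recovering or rebalancing.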
Quoting Nicola Mori <mori@xxxxxxxxxx>:
Dear Ceph user,
I'm wondering how much an increase in the number of PGs would impact
the memory footprint of the OSD daemons. In my cluster I currently
have 512 PGs and I would like to increase that to 1024 to mitigate
some disk occupancy issues, but since my machines have little memory
(down to 24 GB for 16 OSDs) I fear this could kill my cluster. Is it
possible to estimate the relative increase in OSD memory footprint
when doubling the number of PGs (hopefully not a linear scaling)? Or
is there a way to experiment without crashing everything?
Thank you.
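(For the "experiment without crashing everything" part, one
low-risk probe, sketched here under the assumption that
`ceph daemon osd.<id> dump_mempools` is available and keeps its
usual JSON layout: snapshot one OSD's per-category memory
accounting, bump pg_num on a small test pool only, let it settle,
and take the snapshot again. The osd_pglog category is the one most
directly tied to the PG count; field names may differ between
releases.)

    import json, subprocess, sys

    def mempool_bytes(osd_id):
        """Per-category memory accounting of one OSD via its admin socket.

        Must run on the host where osd.<osd_id> lives; on newer releases
        'ceph tell osd.<id> dump_mempools' should work remotely as well.
        """
        out = subprocess.run(
            ["ceph", "daemon", f"osd.{osd_id}", "dump_mempools"],
            capture_output=True, text=True, check=True,
        )
        data = json.loads(out.stdout)
        # Assumed layout: {"mempool": {"by_pool": {<name>: {"bytes": N}, ...}}}
        return {name: pool["bytes"]
                for name, pool in data["mempool"]["by_pool"].items()}

    if __name__ == "__main__":
        osd_id = sys.argv[1] if len(sys.argv) > 1 else "0"
        for name, nbytes in sorted(mempool_bytes(osd_id).items()):
            print(f"{name:28s} {nbytes / 2**20:10.1f} MiB")

Run it once before and once after the pg_num change on the test pool
and diff the two outputs; that gives at least a rough idea of how
the footprint scales before touching the big pool.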
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx