On 7/11/23 09:44, Luis Domingues wrote:
"bluestore-pricache": {
"target_bytes": 6713193267,
"mapped_bytes": 6718742528,
"unmapped_bytes": 467025920,
"heap_bytes": 7185768448,
"cache_bytes": 4161537138
},
Hi Luis,
Looks like the mapped bytes for this OSD process are very close to (just
a little over) the target bytes that were set when you did the perf
dump. There is some unmapped memory that the kernel could reclaim, but
we can't force it to do so. It could be that the kernel is being a
little lazy because there isn't any memory pressure.
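For what it's worth, the counters in the perf dump above are internally consistent: heap_bytes is exactly mapped_bytes plus unmapped_bytes, and the overshoot past the target is only about 5 MiB, while roughly 445 MiB sits in the unmapped (kernel-reclaimable) bucket. A quick sanity check using the numbers you posted:

```python
# Values copied verbatim from the perf dump above (bytes).
target_bytes = 6713193267
mapped_bytes = 6718742528
unmapped_bytes = 467025920
heap_bytes = 7185768448

# The allocator's heap is the mapped portion plus pages it has released
# back but the kernel has not yet reclaimed.
assert heap_bytes == mapped_bytes + unmapped_bytes

overshoot = mapped_bytes - target_bytes
print(f"over target by   {overshoot / 2**20:.1f} MiB")      # ~5.3 MiB
print(f"reclaimable       {unmapped_bytes / 2**20:.1f} MiB")  # ~445.4 MiB
```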
The way the memory autotuning works in Ceph is that the prioritycache
system periodically looks at the mapped memory usage of the process,
then grows or shrinks the aggregate size of the in-memory caches to
try to stay near the target. It's reactive in nature, meaning it
can't completely control for spikes. It also can't shrink the caches
below a small minimum size, so if there is a memory leak it will help to
an extent but can't completely fix it. Once the aggregate memory size
has been decided, it goes through a process of looking at how hot the
different caches are and assigns memory based on where it thinks the
memory would be most useful. Again, though, this is all based on
mapped memory: it can't force the kernel to reclaim memory that has
already been released.
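The loop described above can be sketched roughly like this. To be clear, this is a simplified illustration in the spirit of the prioritycache system, not Ceph's actual code; the constant, function, and cache names here are all made up for the example:

```python
# Simplified sketch of a reactive cache-autotuning pass, illustrative only.

MIN_CACHE_BYTES = 128 * 2**20  # caches are never shrunk below a small floor


def tune_once(target_bytes, mapped_bytes, cache_bytes, hotness):
    """One tuning pass: resize the aggregate cache budget toward the
    target, then split the budget across caches by relative hotness."""
    # React to the gap between observed mapped memory and the target.
    error = mapped_bytes - target_bytes
    new_total = max(cache_bytes - error, MIN_CACHE_BYTES)

    # Assign memory where it is likely most useful: proportionally to
    # how hot each cache has been.
    total_heat = sum(hotness.values())
    per_cache = {
        name: int(new_total * heat / total_heat)
        for name, heat in hotness.items()
    }
    return new_total, per_cache


# Example: the process is 256 MiB over target, so the aggregate cache
# budget shrinks by 256 MiB and is re-split across three example caches.
total, per_cache = tune_once(
    target_bytes=6 * 2**30,
    mapped_bytes=6 * 2**30 + 256 * 2**20,
    cache_bytes=4 * 2**30,
    hotness={"onode": 3.0, "rocksdb-block": 2.0, "data": 1.0},
)
```

Note how the sketch mirrors the limitations above: it only ever reads mapped memory, it clamps at a minimum cache size, and reacting after the fact means a spike has already happened by the time the caches shrink.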
Thanks,
Mark
--
Best Regards,
Mark Nelson
Head of R&D (USA)
Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson@xxxxxxxxx
We are hiring: https://www.clyso.com/jobs/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx