Hi Mark
Thank you for your explanations! Some numbers from this example OSD below.
Cheers
Harry
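For reference, a minimal sketch of how these can be pulled from the
admin socket (osd.0 is just a placeholder for the actual OSD id):

  # mempool stats, including buffer_anon
  ceph daemon osd.0 dump_mempools
  # perf counters, including the prioritycache sections
  ceph daemon osd.0 perf dump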
From dump mempools:
"buffer_anon": {
"items": 29012,
"bytes": 4584503367
},
From perf dump:
"prioritycache": {
"target_bytes": 3758096384,
"mapped_bytes": 7146692608,
"unmapped_bytes": 3825983488,
"heap_bytes": 10972676096,
"cache_bytes": 134217728
},
"prioritycache:data": {
"pri0_bytes": 0,
"pri1_bytes": 0,
"pri2_bytes": 0,
"pri3_bytes": 0,
"pri4_bytes": 0,
"pri5_bytes": 0,
"pri6_bytes": 0,
"pri7_bytes": 0,
"pri8_bytes": 0,
"pri9_bytes": 0,
"pri10_bytes": 0,
"pri11_bytes": 0,
"reserved_bytes": 67108864,
"committed_bytes": 67108864
},
"prioritycache:kv": {
"pri0_bytes": 0,
"pri1_bytes": 0,
"pri2_bytes": 0,
"pri3_bytes": 0,
"pri4_bytes": 0,
"pri5_bytes": 0,
"pri6_bytes": 0,
"pri7_bytes": 0,
"pri8_bytes": 0,
"pri9_bytes": 0,
"pri10_bytes": 0,
"pri11_bytes": 0,
"reserved_bytes": 67108864,
"committed_bytes": 67108864
},
"prioritycache:meta": {
"pri0_bytes": 0,
"pri1_bytes": 0,
"pri2_bytes": 0,
"pri3_bytes": 0,
"pri4_bytes": 0,
"pri5_bytes": 0,
"pri6_bytes": 0,
"pri7_bytes": 0,
"pri8_bytes": 0,
"pri9_bytes": 0,
"pri10_bytes": 0,
"pri11_bytes": 0,
"reserved_bytes": 67108864,
"committed_bytes": 67108864
},
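Converting the main numbers for readability (my own arithmetic, not
part of the dump):

  $ numfmt --to=iec 3758096384 7146692608 134217728
  3.5G
  6.7G
  128M

So if I read this right, mapped_bytes is almost twice target_bytes,
while the data/kv/meta caches already sit at their 64 MiB committed
floors.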
On 20.05.20 14:05, Mark Nelson wrote:
Hi Harald,
Any idea what the priority_cache_manager perf counters show? (Or you
can also enable debug osd / debug priority_cache_manager.) The OSD
memory autotuning works by shrinking the bluestore and rocksdb caches
toward some target value to try to keep the mapped memory of the
process below the osd_memory_target. In some cases it's possible that
something other than the caches is using the memory (usually pglog),
or there's tons of pinned stuff in the cache that for some reason
can't be evicted. Knowing the cache tuning stats might help tell
whether it's trying to shrink the caches and can't for some reason,
or whether there's something else going on.
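Something like this should pull them (untested sketch; osd.0 is a
placeholder, and the exact debug subsystem name may differ by
release):

  # dump just the cache tuning stats
  ceph daemon osd.0 perf dump | jq '.prioritycache'
  # bump debug levels (subsystem name is my assumption, check your release)
  ceph tell osd.0 injectargs '--debug_osd 5 --debug_prioritycache 10'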
Thanks,
Mark
On 5/20/20 6:10 AM, Harald Staub wrote:
As a follow-up to our recent memory problems with OSDs (with high
pglog values:
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/LJPJZPBSQRJN5EFE632CWWPK3UMGG3VF/#XHIWAIFX4AXZK5VEFOEBPS5TGTH33JZO
), we also see high buffer_anon values, e.g. more than 4 GB with
"osd memory target" set to 3 GB. Is there a way to restrict it?
Since it is called "anon", I guess we would first need to find out
what exactly is behind it?
Maybe it is just as Wido said: with lots of small objects, there
will be several problems.
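(The target was set roughly like this; a sketch with osd.0 as a
placeholder, 3221225472 bytes = 3 GiB:)

  ceph config get osd.0 osd_memory_target
  # placeholder value, 3 GiB
  ceph config set osd osd_memory_target 3221225472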
Cheers
Harry
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx