Hi Danny,
Are your ARM binaries built using tcmalloc? At least on x86, we saw
significantly higher memory fragmentation and memory usage with glibc
malloc.
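If you're not sure, one quick check is to look at what the OSD binary
links against. A rough sketch, assuming a standard package install path:

    # Show whether ceph-osd is dynamically linked against tcmalloc.
    # /usr/bin/ceph-osd is an assumption; adjust for your install.
    ldd /usr/bin/ceph-osd | grep -i tcmalloc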
First, you can look at the mempool stats, which may provide a hint:
ceph daemon osd.NNN dump_mempools
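For example, to see which pools are consuming the most memory (the jq
path below assumes a JSON layout with a top-level "mempool.by_pool"
map; the exact layout varies by release, so adjust if yours differs):

    # Dump mempool stats and list the five largest pools by bytes.
    ceph daemon osd.0 dump_mempools | \
      jq '.mempool.by_pool | to_entries | sort_by(-.value.bytes) | .[0:5]'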
Assuming you are using tcmalloc and have the cache autotuning enabled,
you can also set debug_bluestore = "5" and debug_prioritycache = "5"
on one of the OSDs that is using lots of memory. Look for the lines
containing "cache_size" or "tune_memory target". Those will tell you how
much of your memory is being devoted to the bluestore caches and how it's
being divided up between the kv, buffer, and rocksdb block caches.
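Something like this should do it at runtime via the admin socket (the
osd id and log path are placeholders; adjust for your cluster):

    # Raise the debug levels on a running OSD.
    ceph daemon osd.0 config set debug_bluestore 5
    ceph daemon osd.0 config set debug_prioritycache 5

    # Then pull the autotuner lines out of that OSD's log.
    grep -E 'cache_size|tune_memory target' /var/log/ceph/ceph-osd.0.log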
Mark
On 8/1/19 4:25 AM, dannyyang(杨耿丹) wrote:
Hi all:
We have a CephFS environment. The Ceph version is 12.2.10; the servers are ARM, but the FUSE clients are x86. The OSD disks are 8 TB, and some OSDs use 12 GB of memory. Is that normal?
------------------------------------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com